
Top 1000 SQL Performance Tips

Interactive session from MySQL Camp I:


Specific Query Performance Tips (see also database design tips for tips on indexes):

1. Use EXPLAIN to profile the query execution plan.

2. Use the Slow Query Log (always have it on!).

3. Don't use DISTINCT when you have or could use GROUP BY.

4. Insert performance:
   1. Batch INSERT and REPLACE.
   2. Use LOAD DATA instead of INSERT.

5. LIMIT m,n may not be as fast as it sounds.

6. Don't use ORDER BY RAND() if you have more than ~2K records.

7. Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data.

8. Avoid wildcards at the start of LIKE queries.

9. Avoid correlated subqueries in SELECT and WHERE clauses (try to avoid IN).

10. No calculated comparisons -- isolate indexed columns.

11. ORDER BY and LIMIT work best with equalities and covered indexes.

12. Separate text/blobs from metadata; don't put text/blobs in results if you don't need them.

13. Derived tables (subqueries in the FROM clause) can be useful for retrieving BLOBs without sorting them. (A self-join can speed up a query if the first part finds the IDs and then uses them to fetch the rest.)

14. ALTER TABLE ... ORDER BY can take data sorted chronologically and re-order it by a different field -- this can make queries on that field run faster (maybe this goes in indexing?).

15. Know when to split a complex query and join smaller ones.

16. Delete small amounts at a time if you can.

17. Make similar queries consistent so the cache is used.

18. Have good SQL query standards.

19. Don't use deprecated features.

20. Turning OR on multiple indexed fields (before 5.0) into UNION may speed things up (with LIMIT); after 5.0, index_merge should pick it up.

21. Don't run COUNT(*) on InnoDB tables for every search; run it occasionally and/or use summary tables, or if you need it for the total number of rows, use SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS().

22. Use INSERT ... ON DUPLICATE KEY UPDATE (or INSERT IGNORE) to avoid having to SELECT first.
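Tip 22 can replace a SELECT-then-INSERT-or-UPDATE round-trip with a single statement. A minimal sketch, assuming a hypothetical `page_hits` table with `url` as its primary key:

```sql
-- Hypothetical table: one row per URL, with a hit counter.
CREATE TABLE page_hits (
  url  VARCHAR(255) NOT NULL PRIMARY KEY,
  hits INT NOT NULL DEFAULT 1
);

-- One statement, no prior SELECT: insert the row if it is new,
-- otherwise bump the existing counter.
INSERT INTO page_hits (url, hits)
VALUES ('/index.html', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;
```

INSERT IGNORE is the variant to use when a conflicting row should simply be left untouched rather than updated.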

23. Use a groupwise maximum instead of subqueries.

24. Avoid using IN(...) when selecting on indexed fields; it can kill the performance of a SELECT query.
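The groupwise-maximum trick in tip 23 replaces a correlated subquery with a join against an aggregate. A sketch, assuming a hypothetical `prices` table of dealer prices per article:

```sql
-- Goal: for each article, the row carrying its maximum price.

-- Correlated-subquery form (what tip 23 advises against):
--   SELECT * FROM prices p
--   WHERE p.price = (SELECT MAX(price) FROM prices WHERE article = p.article);

-- Groupwise-maximum form: aggregate once, then join back.
SELECT p.article, p.dealer, p.price
FROM prices p
JOIN (SELECT article, MAX(price) AS max_price
      FROM prices
      GROUP BY article) m
  ON p.article = m.article AND p.price = m.max_price;
```

The derived table is computed once, instead of re-running the MAX() subquery for every row of the outer table.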


Scaling Performance Tips:

1. Use benchmarking.

2. Isolate workloads; don't let administrative work (e.g. backups) interfere with customer-facing performance.

3. Debugging sucks, testing rocks!

4. As your data grows, indexing may change (cardinality and selectivity change), and structure may need to change. Make your schema as modular as your code. Make your code able to scale. Plan for and embrace change, and get developers to do the same.

Network Performance Tips:

1. Minimize traffic by fetching only what you need.
   1. Use paging/chunked data retrieval to limit result sizes.
   2. Don't use SELECT *.
   3. Be wary of lots of small quick queries if a longer query can be more efficient.

2. Use multi_query if appropriate to reduce round-trips.

3. Use stored procedures to avoid bandwidth wastage.
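A stored procedure that bundles a common lookup server-side might be sketched as follows; the procedure, table, and column names here are illustrative, not from the original:

```sql
DELIMITER //

-- Hypothetical procedure: fetch a user's 20 most recent orders in one
-- round-trip, instead of the client sending the full query text each time.
CREATE PROCEDURE get_recent_orders(IN p_user_id INT)
BEGIN
  SELECT order_id, total
  FROM orders
  WHERE user_id = p_user_id
  ORDER BY created_at DESC
  LIMIT 20;
END //

DELIMITER ;

CALL get_recent_orders(42);
```

(DELIMITER is a mysql client command, needed so the semicolons inside the body don't terminate the CREATE statement early.)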

OS Performance Tips:

1. Use proper data partitions.
   1. For Cluster: start thinking about Cluster *before* you need it.

2. Keep the database host as clean as possible. Do you really need a windowing system on that server?

3. Utilize the strengths of the OS.

4. Pare down cron scripts.

5. Create a test environment.

6. Keep schema and config files under source control.

7. For LVM InnoDB backups, restore to a different instance of MySQL so InnoDB can roll forward.

8. Partition appropriately.

9. Partition your database when you have real data -- do not assume you know your dataset until you have real data.

MySQL Server Overall Tips:

1. innodb_flush_log_at_trx_commit=0 can help slave lag.

2. Optimize for data types; use consistent data types. Use PROCEDURE ANALYSE() to help determine the smallest data type for your needs.

3. Use optimistic locking, not pessimistic locking; try to use a shared lock, not an exclusive lock (LOCK IN SHARE MODE vs. FOR UPDATE).
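The shared-vs-exclusive distinction in tip 3 looks like this inside an InnoDB transaction; the table and column names are illustrative:

```sql
-- Shared (read) lock: other transactions may still read the row
-- and take their own shared locks on it.
SELECT balance FROM accounts WHERE id = 42 LOCK IN SHARE MODE;

-- Exclusive (write) lock: blocks other transactions' locking reads
-- and writes on the row until this transaction commits.
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
```

Prefer the shared form when you only need to guarantee the row doesn't change under you; reserve FOR UPDATE for rows you intend to modify.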

4. If you can, compress text/blobs.

5. Compress static data.

6. Don't back up static data as often.

7. Enable and increase the query and buffer caches if appropriate.

8. Config params -- http://docs.cellblue.nl/2007/03/17/easy-mysql-performance-tweaks/ is a good reference.

9. Config variables & tips:
   1. Use one of the supplied config files.
   2. key_buffer, unix cache (leave some RAM free), per-connection variables, innodb memory variables.
   3. Be aware of global vs. per-connection variables.
   4. Check SHOW STATUS and SHOW VARIABLES (GLOBAL|SESSION in 5.0 and up).
   5. Be aware of swapping, especially with Linux "swappiness" (bypass the OS file cache for InnoDB data files with innodb_flush_method=O_DIRECT if possible; this is also OS-specific).
   6. Defragment tables, rebuild indexes, do table maintenance.
   7. If you use innodb_flush_log_at_trx_commit=1, use a battery-backed hardware write-cache controller.
   8. More RAM is good; so is faster disk speed.
   9. Use 64-bit architectures.
   10. Use --skip-name-resolve.
   11. Increase myisam_sort_buffer_size to optimize large inserts (this is a per-connection variable).
   12. Look up the memory tuning parameters for on-insert caching.
   13. Increase the temp table size in a data warehousing environment (default is 32Mb) so it doesn't write to disk (also constrained by max_heap_table_size, default 16Mb).
   14. Run with SQL_MODE=STRICT to help identify warnings.
   15. Put the /tmp dir on a battery-backed write cache.
   16. Consider battery-backed RAM for InnoDB log files.
   17. Use --safe-updates for clients.
   18. Redundant data is redundant.
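The global-vs-per-connection distinction above can be checked and exercised like this (sort_buffer_size is just one example of a per-connection variable):

```sql
-- Inspect both scopes (5.0+ syntax).
SHOW GLOBAL VARIABLES LIKE 'sort_buffer_size';
SHOW SESSION VARIABLES LIKE 'sort_buffer_size';

-- Affects only connections opened after this statement runs:
SET GLOBAL sort_buffer_size = 4 * 1024 * 1024;

-- Affects only the current connection:
SET SESSION sort_buffer_size = 4 * 1024 * 1024;
```

A common surprise is that SET GLOBAL does not change the value already in effect for existing connections, including the one that issued it.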

Storage Engine Performance Tips:

1. InnoDB ALWAYS keeps the primary key as part of each secondary index, so do not make the primary key very large.

2. Utilize different storage engines on master/slave, e.g. if you need fulltext indexing on a table.

3. The BLACKHOLE engine plus replication is much faster than FEDERATED tables for things like logs.

4. Know your storage engines and what performs best for your needs; know that different ones exist.
   1. E.g., use MERGE tables or ARCHIVE tables for logs.
   2. Archive old data -- don't be a pack-rat! Two common engines for this are ARCHIVE tables and MERGE tables.

5. Use row-level instead of table-level locking for OLTP workloads.

6. Try out a few schemas and storage engines in your test environment before picking one.

Database Design Performance Tips:

1. Design sane query schemas. Don't be afraid of table joins; often they are faster than denormalization.

2. Don't use boolean flags.

3. Use indexes.

4. Don't index everything.

5. Do not duplicate indexes.

6. Do not use large columns in indexes if the ratio of SELECTs:INSERTs is low.

7. Be careful of redundant columns in an index or across indexes.

8. Use a clever key and ORDER BY instead of MAX.

9. Normalize first, and denormalize where appropriate.

10. Databases are not spreadsheets, even though Access really, really looks like one. Then again, Access isn't a real database.

11. Use INET_ATON and INET_NTOA for IP addresses, not char or varchar.

12. Make it a habit to REVERSE() email addresses, so you can easily search domains (this will help avoid wildcards at the start of LIKE queries if you want to find everyone whose e-mail is in a certain domain).
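Tips 11 and 12 in action, against a hypothetical `users` table (names and values are illustrative):

```sql
-- Tip 11: store IPv4 addresses as INT UNSIGNED, not strings.
CREATE TABLE users (
  id        INT UNSIGNED NOT NULL PRIMARY KEY,
  ip        INT UNSIGNED NOT NULL,          -- INET_ATON('10.0.5.9') = 167773449
  email_rev VARCHAR(255) NOT NULL,          -- REVERSE('bob@example.com')
  KEY (email_rev)
);

INSERT INTO users VALUES (1, INET_ATON('10.0.5.9'), REVERSE('bob@example.com'));

-- Tip 12: a domain search becomes a leading-prefix LIKE, which can use
-- the index on email_rev (no wildcard at the start of the pattern).
SELECT id, INET_NTOA(ip)
FROM users
WHERE email_rev LIKE CONCAT(REVERSE('@example.com'), '%');
```

Storing the reversed address turns "ends with @example.com" into "starts with moc.elpmaxe@", which is index-friendly.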

13. A NULL column can take more room to store than a NOT NULL one.

14. Choose appropriate character sets & collations -- UTF16 will store each character in 2 bytes whether it needs it or not; latin1 is faster than UTF8.

15. Use triggers wisely.

16. Use min_rows and max_rows to specify approximate data sizes so space can be pre-allocated and reference points can be calculated.

17. Use HASH indexing for indexing across columns with similar data prefixes.

18. Use myisam_pack_keys for int data.

19. Be able to change your schema without ruining the functionality of your code.

20. Segregate tables/databases that benefit from different configuration variables.

Other:

1. Hire a MySQL(tm) Certified DBA.

2. Know that there are many consulting companies out there that can help, as well as MySQL's Professional Services.

3. Read and post to MySQL Planet at http://www.planetmysql.org

4. Attend the yearly MySQL Conference and Expo or other conferences with MySQL tracks (link to the conference here).

5. Support your local User Group (link to forge page w/user groups here).

Authored by

Jay Pipes, Sheeri Kritzer, Bill Karwin, Ronald Bradford, Farhan "Frank Mash" Mashraqi, Taso Du Val, Ron Hu, Klinton Lee, Rick James, Alan Kasindorf, Eric Bergen, Kaj Arno, Joel Seligstein, Amy Lee, Sameer Shelavale










32 Tips To Speed Up Your MySQL Queries

Tue, 2009-02-03 10:11

If you are interested in how to create fast MySQL queries, this article is for you.

1. Use persistent connections to the database to avoid connection overhead.

2. Check that all tables have PRIMARY KEYs on columns with high cardinality (many distinct values). A `gender` column has low cardinality (selectivity); a unique user id column has high cardinality and is a good candidate to become a primary key.

3. All references between different tables should usually be done with indexes (which also means the columns must have identical data types, so that joins based on the corresponding columns will be faster). Also check that fields you often search on (those appearing frequently in WHERE, ORDER BY or GROUP BY clauses) have indexes, but don't add too many: the worst thing you can do is add an index on every column of a table (I haven't seen a table needing more than 5 indexes, even one 20-30 columns big). If you never refer to a column in comparisons, there's no need to index it.

4. Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements.

5. Use less RAM per row by declaring columns only as large as they need to be to hold the values stored in them.

6. Use leftmost index prefixes -- in MySQL you can define an index on several columns, and the left part of that index can be used as a separate one, so you need fewer indexes.

7. When your index consists of many columns, why not create a hash column which is short, reasonably unique, and indexed? Then your query will look like:

SELECT *
FROM table
WHERE hash_column = MD5( CONCAT(col1, col2) )
AND col1='aaa' AND col2='bbb';
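A sketch of how such a hash column might be set up and maintained; the table and column names here are illustrative:

```sql
-- Hypothetical table with a precomputed hash of (col1, col2).
CREATE TABLE lookup (
  col1 VARCHAR(100) NOT NULL,
  col2 VARCHAR(100) NOT NULL,
  hash_column CHAR(32) NOT NULL,   -- MD5 yields 32 hex chars, so CHAR(32) fits exactly
  KEY (hash_column)
);

-- The application (or a trigger) keeps the hash in sync on insert:
INSERT INTO lookup (col1, col2, hash_column)
VALUES ('aaa', 'bbb', MD5(CONCAT('aaa', 'bbb')));
```

The short CHAR(32) index is cheaper to maintain and search than a composite index over several long VARCHAR columns; the extra equality predicates on col1/col2 guard against hash collisions.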

8. Consider running ANALYZE TABLE (or myisamchk --analyze from the command line) on a table after it has been loaded with data, to help MySQL better optimize queries.

9. Use the CHAR type when possible (instead of VARCHAR, BLOB or TEXT) -- when values of a column have constant length: an MD5 hash (32 symbols), ICAO or IATA airport codes (4 and 3 symbols), BIC bank codes, etc. Data in CHAR columns can be found faster than in variable-length data type columns.

10. Don't split a table just because you have too many columns. In accessing a row, the biggest performance hit is the disk seek needed to find the first byte of the row.

11. Declare a column as NOT NULL if it really is -- this speeds up table traversal a bit.

12. If you usually retrieve rows in the same order, like expr1, expr2, ..., use ALTER TABLE ... ORDER BY expr1, expr2, ... to optimize the table.

13. Don't use a PHP loop to fetch rows from the database one by one just because you can -- use IN instead, e.g.

SELECT *
FROM `table`
WHERE `id` IN (1,7,13,42);

14. Use column default values, and insert only those values that differ from the default. This reduces the query parsing time.

15. Use INSERT DELAYED or INSERT LOW_PRIORITY (for MyISAM) to write to your change log table. Also, if it's MyISAM, you can add the DELAY_KEY_WRITE=1 option -- this makes index updates faster because they are not flushed to disk until the table is closed.

16. Think of storing user session data (or any other non-critical data) in a MEMORY table -- it's very fast.

17. For your web application, images and other binary assets should normally be stored as files. That is, store only a reference to the file rather than the file itself in the database.

18. If you have to store big amounts of textual data, consider using a BLOB column to contain compressed data (MySQL's COMPRESS() seems to be slow, so gzipping on the PHP side may help) and decompressing the contents on the application server side. In any case, it must be benchmarked.

19. If you often need to calculate COUNT or SUM based on information from a lot of rows (article ratings, poll votes, user registration counts, etc.), it makes sense to create a separate table and update the counter in real time, which is much faster. If you need to collect statistics from huge log tables, take advantage of a summary table instead of scanning the entire log table every time.

20. Don't use REPLACE (which is DELETE+INSERT and wastes ids): use INSERT ... ON DUPLICATE KEY UPDATE instead (i.e. it's INSERT + UPDATE if a conflict takes place). The same technique can be used when you would otherwise first run a SELECT to find out whether the data is already in the database, and then run either INSERT or UPDATE. Why choose yourself -- rely on the database side.

21. Tune MySQL caching: allocate enough memory for the buffer (e.g. SET GLOBAL query_cache_size = 1000000) and define query_cache_min_res_unit depending on the average query result set size.

22. Divide complex queries into several simpler ones -- they have more chances of being cached, so they will be quicker.

23. Group several similar INSERTs into one long INSERT with multiple VALUES lists to insert several rows at a time: the query will be quicker because connection + sending + parsing a query takes 5-7 times the cost of the actual data insertion (depending on row size). If that is not possible, use START TRANSACTION and COMMIT if your database is InnoDB; otherwise use LOCK TABLES -- this benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. In that case, unlock your tables every 1000 rows or so to allow other threads access to the table.

24. When loading a table from a text file, use LOAD DATA INFILE (or my tool for that); it's 20-100 times faster.
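Tips 23 and 24 side by side, against a hypothetical `log` table:

```sql
-- Tip 23: one multi-row INSERT instead of three single-row round-trips.
INSERT INTO log (user_id, action) VALUES
  (1, 'login'),
  (2, 'logout'),
  (3, 'login');

-- Tip 24: bulk-load from a tab-separated file, far faster than
-- issuing the equivalent INSERTs one by one.
LOAD DATA INFILE '/tmp/log.tsv'
INTO TABLE log
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(user_id, action);
```

The file path, table, and column names are placeholders; LOAD DATA INFILE also requires the FILE privilege and a path readable by the server.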

25. Log slow queries on your dev/beta environment and investigate them. This way you can catch queries whose execution time is high, those that don't use indexes, and also slow administrative statements (like OPTIMIZE TABLE and ANALYZE TABLE).

26. Tune your database server parameters: for example, increase buffer sizes.

27. If you have lots of DELETEs in your application, or updates of dynamic-format rows (if you have a VARCHAR, BLOB or TEXT column, the row has dynamic format) of your MyISAM table to a longer total length (which may split the row), schedule an OPTIMIZE TABLE query every weekend via crond. This performs the defragmentation, which means faster queries. If you don't use replication, add the LOCAL keyword to make it faster.

28. Don't use ORDER BY RAND() to fetch several random rows. Fetch 10-20 entries (the last by time added or by ID) and use array_random() on the PHP side. There are also other solutions.

29. Consider avoiding the HAVING clause -- it's rather slow.

30. In most cases, a DISTINCT clause can be considered a special case of GROUP BY, so the optimizations applicable to GROUP BY queries can also be applied to queries with a DISTINCT clause. Also, if you use DISTINCT, try to use LIMIT (MySQL stops as soon as it finds row_count unique rows) and avoid ORDER BY (it requires a temporary table in many cases).

31. When I read "Building Scalable Web Sites", I found that it is sometimes worth de-normalising some tables (Flickr does this), i.e. duplicating some data in several tables to avoid JOINs, which are expensive. You can support data integrity with foreign keys or triggers.

32. If you want to test a specific MySQL function or expression, use the BENCHMARK function to do that.
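BENCHMARK() executes an expression repeatedly so its per-call cost shows up in the reported elapsed time (the expression's own result is discarded):

```sql
-- Run MD5 a million times; the elapsed time reported by the client
-- is effectively the cost of the expression.
SELECT BENCHMARK(1000000, MD5('test'));

-- Re-run with another expression to compare relative costs.
SELECT BENCHMARK(1000000, SHA1('test'));
```

Note this measures expression evaluation only, not table access or I/O.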

How To Speed Up MySQL: An Introduction To Optimizing

By Stephen Jayna, 3rd August 2009

Although there is nothing groundbreaking in this document, consider it a bringing together of techniques for your first foray into optimization. We won't discuss the more esoteric methods of squeezing the very last millisecond out of MySQL. There are a myriad of parameters to tune: here's what you need to get right first.

That hackneyed phrase 'one size does [not] fit all' is worth bearing in mind too. Clearly the costs and benefits to your specific application must be carefully considered. Much of the advice within should be considered the lowest common denominator for any time-sensitive MySQL-based application, whether it's web based or not.

Low Hanging Fruit: Query Optimization

First turn on your slow query log and make sure it's logging queries that don't use indexes. In your my.cnf you need the following lines:

log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes

Unfortunately the stock MySQL distribution doesn't support logging queries that take less than a second. However, Percona have released a set of patches that allow you to do this. Certainly one second is fine to begin with; we're after the lowest hanging fruit after all.

If you're running a web application and any given query is taking over a second to run, you should do better, and chances are you can. If you're caching the query with something like memcached, perhaps it doesn't matter so much. Let's assume it does; after all, some poor user is going to be stuck waiting for that query to complete, even if it's only once in a while.

cat /var/log/mysql/mysql-slow.log | more

# Time: 090804 12:45:44
# User@Host: stephen[stephen] @ localhost []
# Query_time: 11  Lock_time: 0  Rows_sent: 1  Rows_examined: 91687
use stephen_drupal;
SELECT uid,fid
FROM profile_values
WHERE uid NOT IN
    (SELECT uid FROM profile_values WHERE fid = 13 AND value = 'no')
    AND value REGEXP 'ert' AND uid != 4145
    AND uid NOT IN
    (SELECT uid FROM users WHERE status = 0)
ORDER BY RAND()
LIMIT 1;

This query was taken from a client's slow query log.


Explain Yourself

Firstly, does your query make use of indexes? Here's how to find out how MySQL has decided to parse and execute your query:

EXPLAIN EXTENDED
SELECT uid,fid
FROM profile_values
WHERE uid NOT IN
    (SELECT uid FROM profile_values WHERE fid = 13 AND value = 'no')
    AND value REGEXP 'ert' AND uid != 4145
    AND uid NOT IN
    (SELECT uid FROM users WHERE status = 0)
ORDER BY RAND()
LIMIT 1
\G

Use the parameter 'EXTENDED' to get any additional information MySQL has on the query. The '\G' will format the output into a more readable form.

*************************** 1. row ***************************
           id: 1
  select_type: PRIMARY
        table: profile_values
         type: range
possible_keys: uid
          key: uid
      key_len: 5
          ref: NULL
         rows: 91137
        Extra: Using where; Using temporary; Using filesort
*************************** 2. row ***************************
           id: 3
  select_type: DEPENDENT SUBQUERY
        table: users
         type: unique_subquery
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: func
         rows: 1
        Extra: Using where; Full scan on NULL key
*************************** 3. row ***************************
           id: 2
  select_type: DEPENDENT SUBQUERY
        table: profile_values
         type: index_subquery
possible_keys: uid,fid
          key: uid
      key_len: 5
          ref: func
         rows: 6
        Extra: Using where; Full scan on NULL key

3 rows in set, 1 warning (0.00 sec)

Indexes vs. Table Scanning

Something to look out for is a high number in the rows column. What should be considered high depends on the size of your database and, to some extent, the nature of the query. If you're not using an index, for whatever reason, MySQL is scanning through each row, and stress is being placed on your disks and I/O subsystem. It's true that sometimes it's more efficient for MySQL to scan a table rather than use an index, especially if the table is small; however, you should be wary.

Memory

If your database fits neatly into memory or resides on a solid-state drive, then the difference between using an index and a table scan may be less noticeable, but it should still be fixed. Having plentiful amounts of memory is always a good thing, but it can serve to mask serious underlying problems.

Dependent Subqueries

Performing a dependent subquery within a [NOT] IN clause is prone to trouble with MySQL. Generally speaking, it's better if you steer clear of them, or at least are very wary of using them.

The query above can be re-written without using any subqueries, making vast time savings. Don't be afraid to split your queries into two or more parts either. Especially if you're using MyISAM and are worrying about lock contention, you'll find that two or three very fast queries are preferable to one long one.

Splitting Queries: Transactional Safety And ACID Compliance

Splitting your queries into parts has data-consistency ramifications. On MyISAM this may have consequences you need to consider, since transactions aren't available. With InnoDB, transactions, and row-level locking, this is less of an issue. More on this later, however.

Making Your Application Do The Work

My philosophy has always been to ask the database to do as little as is reasonably possible and to move processing into the application. The less work you give it, the less resource it needs.

The fact of the matter is that it's almost always harder to scale your database server/system than to pop in another web server. That's if you're web serving, of course, but I imagine it might apply to any application. Web servers are relatively independent: they don't rely on other web servers for their function in the way a database running across multiple servers does.

Neither do they get out of sync and have to catch up before they become available again. Or need battery-backed caches to remain consistent. Or highly skilled and expensive DBAs. They're a walk in the park compared to any traditional RDBMS, frankly. And they can make much better use of all those CPUs than InnoDB or a lock-contended MyISAM can.

Okay, so it's true, this is a generalisation. It of course depends on your application. Perl, PHP and Python are just as competent at doing somersaults with your data as MySQL is, perhaps more so. Besides which, you put yourself in control of potential bottlenecks, and that's always worth something. You'd rather tinker with your code base than MySQL's, surely?

That Query: Optimized

The initial query has been broken down into three. The first two queries are run and their results are fed into the third. Importantly, we've changed REGEXP to a LIKE, and in the second stage you'll see we remove the ORDER BY RAND() LIMIT 1 construct.

SQL_NO_CACHE ensures the queries are run without the aid of the query cache, which would confuse our profiling. We acknowledge that the OS disk cache has been warmed up, however.


That original query:

SELECT SQL_NO_CACHE uid,fid
FROM profile_values
WHERE uid NOT IN
    (SELECT uid FROM profile_values WHERE fid = 13 AND value = 'no')
    AND value REGEXP 'ert' AND uid != 4145
    AND uid NOT IN
    (SELECT uid FROM users WHERE status = 0)
ORDER BY RAND()
LIMIT 1;

1 row in set (6.01 sec)

The optimized query (Stage 1):

x =
    SELECT SQL_NO_CACHE uid
    FROM profile_values
    WHERE fid = 13 AND value = 'no';

    444 rows in set (0.00 sec)

y =
    SELECT SQL_NO_CACHE uid
    FROM users WHERE status = 0;

    464 rows in set (0.00 sec)

SELECT SQL_NO_CACHE uid,fid
FROM profile_values
WHERE uid NOT IN (4145,{x},{y})
AND value LIKE '%ert%'
ORDER BY RAND()
LIMIT 1;

1 row in set (0.52 sec)

We can do better, however. The construct "ORDER BY RAND() LIMIT 1" is less than optimal. There are a number of ways around this, but I'm happy to break it into two queries. They will run in O(n) time rather than O(1) -- that is to say, the time taken is dependent on the size of the table -- but in this case I happen to know the table size is always going to be very modest.

The optimized query (Stage 2):

rows_in_set =
    SELECT SQL_NO_CACHE COUNT(uid)
    FROM profile_values
    WHERE uid NOT IN (4145,{x},{y})
    AND value LIKE '%ert%';

    1 row in set (0.13 sec)

random_number = a random number between 1 and rows_in_set

SELECT SQL_NO_CACHE uid,fid
FROM profile_values
WHERE uid NOT IN (4145,{x},{y})
AND value LIKE '%ert%'
LIMIT {random_number}, 1;

1 row in set (0.13 sec)

The query went from 6.01 seconds, to 0.52 seconds, to finally a set of queries that took ~0.26 seconds. Admittedly there is some extra overhead in the additional bandwidth generated and in the processing the application must do to produce the third query from the first two but, in this case at least, it's negligible.

It might seem counterintuitive, but decreasing the number of queries you send MySQL doesn't necessarily equate to a faster application. Once you've bought into the idea that it's easier to scale web servers -- and not MySQL -- you might consider letting your application handle more of the work.

Even if we hadn't managed to decrease the overall computation time and had simply moved some of it from MySQL to your application, it would still be a win. Your web server is much better at concurrency than MySQL is.

Lock Contention: To MyISAM or InnoDB

Most MySQL databases out there run on MyISAM. I don't have the figures, and I wouldn't know how to go about acquiring them, but since it's the default MySQL table type I'd wager that it is the case.

MyISAM is fine at many things and for many situations. You don't have to worry about queries that COUNT(*), for example. It's a very useful query to perform, but on InnoDB it isn't fast: you'll need to find an alternative. MyISAM is more efficient in terms of disk space, in part due to it making physically smaller indexes, which gives you all the more chance of having them fit into memory. On databases where inserts and updates rarely feature, it's faster. But you'd better make sure that's the case.

mysql> show status like 'Table_locks_waited';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| Table_locks_waited | 23412 |
+--------------------+-------+

A large or growing Table_locks_waited value is the sign of a serious concurrency bottleneck.

Any long-running select will block an insert or update from happening. That will in turn prevent any further selects from occurring until after the queued insert or update is complete. Your computer -- with all its wonderful parallelism -- will suddenly begin working in serial, at least with respect to MySQL. It doesn't matter how many CPUs you throw at it, only one will be used. A performance nightmare. MyISAM is for the most part fantastic, and probably powers much of the web, but it -- like everything else -- has costs and benefits that must be weighed.

Replication and Sharding

If you're not suffering from lock contention, your queries are using the appropriate indexes (which you've made if they didn't exist) and you've optimised your queries, then perhaps you need to consider replicating or sharding your data.

Replication

Replication is an excellent way to distribute your database to more than one location. It's used by some as a way to scale the number of reads that can be performed. However, the number of writes the entire system can handle is limited to that of the weakest server. Replication is a very wasteful way to scale your application, as exactly the same data appears on each server: hardly an efficient use of resources. However, it's very simple to implement and thus remains a favoured method of scaling, in the first instance. Beyond that you'll have to think about sharding.

Sharding, Splintering or Partitioning Your Data

Sharding your data is a superior way to scale your application. It's efficient in the sense that the same data doesn't appear in more than one location. Rather, you split your data into 'shards' which are placed on different disks or servers, with individual resources dedicated to each shard.

In terms of implementation it is more difficult than replication, there's no question of that. It very much depends on your data and whether there is a way to divide it methodically. Ideally it's something that you would build into your application during its initial design. If you wish to scale your writes you must shard: replication will only take you so far.















gr8gonzo

Posted on 07/27/09 at 11:02 PM

There are ONLY 3 ways to speed up MySQL, and everything else is simply a finer point of one of these 3 ways. Here they are, in order of importance:

1. Optimize queries
2. Tune the MySQL configuration
3. Add more hardware



============================================================
#1. Query Optimization
============================================================

The most common problem with MySQL performance is unoptimized queries. Here are some examples of unoptimized queries:


- Queries that don't use indexes.
- Queries that use SELECT *.
- Queries that search full-text fields.
- Queries that are not properly limited.
- Queries that use ORDER BY unnecessarily.




Indexes

By far, the biggest problem queries are ones that don't use indexes or don't use the BEST indexes. Indexes are the key to getting the best performance out of your queries. Indexes are basically shortcuts for MySQL -- they work the same way as an index in a classroom textbook. Let's say you wanted to look up all pages containing "gr8gonzo." You COULD go through every word in the book and find all the pages, but it's far faster to just flip to the index and see that "gr8gonzo" happens to be on pages 2, 6, and 32.

Most people know how to use basic indexes, but most people don't know how to use the BEST indexes. A lot of queries have more than one thing in the WHERE clause, like this:

SELECT fields FROM mytable
  WHERE field1 > 123 AND field2 = 'gr8gonzo';



Most people will have an index for field1 and an index for field2. This is good, and the query will try to make use of one of those indexes (and will be faster). But if this is a query that is run frequently, it would be even better to have ANOTHER index that has BOTH field1 and field2 in it. That (usually) gives you the best query performance.

That said, you don't want to just create tons of these indexes, since each index does take a little bit of extra work for MySQL to update whenever the table changes, and those little bits can add up over time. You should really only create these multi-field indexes when there are frequent, slow queries that COULD take advantage of them. In section 2 of this article, we'll cover some ways of having MySQL tell you which queries need a tune-up, but there is one way to tell immediately if your query isn't using indexes...




EXPLAIN-ing Queries

If I wanted to see if the above query was working well, I could use EXPLAIN to do it. When you
EXPLAIN a query, you're simply asking MySQL to tell you what it WOULD do if it ran the
query for you. It responds with a computerish version of "Well, in order to run your query, I
would use this index. That would leave me with X rows, which I would then look at in order to
figure out which ones you wanted."

To EXPLAIN a query, all you have to do is run the same query but put "EXPLAIN" in front of
it:

EXPLAIN SELECT fields FROM mytable
WHERE field1 > 123 AND field2 = 'gr8gonzo';



The result looks something like this:

+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | mytable | ALL  | PRIMARY       | NULL | NULL    | NULL | 898256 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+



WHOA! At first glance, this is probably really confusing, but you can often just ignore a lot of
the information, like id, select_type, table, type, and ref. And MySQL sometimes calls indexes
"keys", so now let's look at the same result but without the extra columns:

+---------------+------+---------+------+--------+-------------+
| possible_keys | key  | key_len | ref  | rows   | Extra       |
+---------------+------+---------+------+--------+-------------+
| PRIMARY       | NULL | NULL    | NULL | 898256 | Using where |
+---------------+------+---------+------+--------+-------------+



Basically what this says is that MySQL had to go one-by-one through 898,256 rows and check
each one to see if field1 > 123 and if field2 = 'gr8gonzo'. That's a lot of processing to do,
especially if the final result is just a couple of rows (meaning that there are nearly 900,000 rows
that get searched uselessly). Let's try adding in an index for one of those fields:

ALTER TABLE `mytable`
ADD INDEX `IDX_FIELD1` (`field1`);



If we re-run the EXPLAIN, we'll see:

+---------------+------------+---------+-------+------+-------------+
| possible_keys | key        | key_len | ref   | rows | Extra       |
+---------------+------------+---------+-------+------+-------------+
| IDX_FIELD1    | IDX_FIELD1 | 5       | const | 1246 | Using where |
+---------------+------------+---------+-------+------+-------------+



Well, now we're down to only looking at 1,246 rows. That's much better than 898 thousand, but
we can do even better. Our query uses two fields in the WHERE clause, so we can probably gain
better performance by adding in an index containing BOTH those fields:

ALTER TABLE `mytable`
ADD INDEX `IDX_FIELDS1_2` (`field1`, `field2`);

...and now re-run the EXPLAIN and we get:

+---------------+---------------+---------+-------------+------+-------------+
| possible_keys | key           | key_len | ref         | rows | Extra       |
+---------------+---------------+---------+-------------+------+-------------+
| IDX_FIELDS1_2 | IDX_FIELDS1_2 | 5       | const,const | 16   | Using where |
+---------------+---------------+---------+-------------+------+-------------+

Voila! Now when we run the exact same query for real, we know that MySQL only has to search
through 16 rows instead of nearly 1 million. A guaranteed speed increase, and it was free!


NOTE: In the above output, "possible_keys" will sometimes show more than one index,
indicating that there's more than one choice that could help the query run faster. However, the
"chosen" index will be in the "key" field. The "ref" column can give you an idea of how many
fields are involved in the index. For example, if you have an index on one field, your "ref"
column will probably just say "const", but if you have an index on two fields and both of those
fields are in the WHERE clause, then you'll probably see "const,const" in the "ref" column.


ANOTHER NOTE: Whenever MySQL has to look at every row in the table, it's called a "table
scan." Table scans are the slowest way MySQL can look for data. When you EXPLAIN a query,
look at the "type" column - if it says "ALL" then MySQL is doing a table scan to find your data.
If it says something else, like "range", then it is making use of an index. Occasionally, on small
tables, MySQL will do a table scan even if you have an index. This is just MySQL knowing what
is best in that situation, but you normally want to avoid these. Here's a link to the MySQL
documentation on avoiding table scans:

http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html


There's a lot of in-depth optimization stuff on how to use EXPLAIN. If you feel like reading
documentation, check here:

http://dev.mysql.com/doc/refman/5.0/en/using-explain.html

I also recently found and highly recommend a free MySQL manager application called
HeidiSQL that (among other things) makes it easy to create and update your indexes. Plus, when
you add indexes, it will show you the SQL code it ran to create those indexes, making it a useful
learning tool.

http://www.heidisql.com/

There's also phpMyAdmin, which is installed on a lot of web hosts:

http://www.phpmyadmin.net



Using SELECT *

I'm guilty of it. It's far easier to write queries that use SELECT * and not have to worry about
typing out 10 field names, but it COULD be the culprit that slows down your web application.
Here's a common mistake:

Let's say you run a web site that collects stories written by your members. All the stories are put
into one big table called stories. So far so good. But now let's say you have a query out there that
is used to create a menu to link to all the stories:

SELECT * FROM stories;

Well, if the CONTENTS of each story is in the stories table, then whenever you run the above
query, MySQL is also sending every letter of every story in the system back to your script. If you
have 1,000 stories that are about 10k each, then every time someone views the menu, your script
is downloading 10 megabytes of extra data that it just throws away without using. What a waste!

Instead, try changing your query to something like:

SELECT storyID, storyName, storyDate FROM stories;

Now we're only selecting a few fields that we need for the menu. Get into the habit of specifying
ONLY the fields your script needs, and you'll find that it's easier than you think, and your scripts
WILL run faster.

TIP: There's a quick way to see a summary of all the fields in the table and what type of field
they are:

DESCRIBE mytable;
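Relatedly, if you also want to see which indexes a table already has (standard MySQL commands, though not ones the article shows), you can ask for the full table definition or just the index list:

```sql
-- Full table definition, including every index:
SHOW CREATE TABLE mytable;

-- Or just the indexes, one row per indexed column:
SHOW INDEX FROM mytable;
```

SHOW INDEX is handy when you're deciding whether a new multi-field index would overlap one that already exists.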





The Full Text

Let's stick with the "stories" example above. People will probably want to search through stories
for specific words. If your story content is in a full-text field (e.g. TEXT datatype), then chances
are that you're searching like this:

SELECT storyID FROM stories
WHERE storyContent LIKE '%fondled the hubcaps%';

This probably runs quickly when you don't have many stories, but it'll get slower and slower and
slower over time. In this case, consider an open-source product called Sphinx Search:

http://sphinxsearch.com/

It specializes in taking your full-text content and making it searchable. A query that takes 10
seconds to run in MySQL could take 0.1 seconds in Sphinx, and that is not an exaggeration. The
downside is that it is a separate program / daemon, and requires a bit of know-how to set up and
get running, but it's worth the time. They have community forums to help, and some people here
at Experts Exchange (like me) can also help.
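If a separate daemon is overkill for your site, MySQL also has a built-in FULLTEXT index type (on MyISAM tables in the 5.0-era versions this article covers) that is much faster than a leading-wildcard LIKE, though less flexible than Sphinx. A rough sketch, assuming the stories table is MyISAM and storyContent is a TEXT column:

```sql
-- Build the full-text index once (this can take a while on a big table):
ALTER TABLE stories ADD FULLTEXT INDEX idx_content (storyContent);

-- Then search with MATCH ... AGAINST instead of LIKE '%...%':
SELECT storyID FROM stories
WHERE MATCH(storyContent) AGAINST('fondled hubcaps');
```

Note that FULLTEXT has its own quirks (minimum word length, stopwords), so test it against your real data before committing to it.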




Add LIMITs

This one's simple - if you only need a couple of rows out of thousands that are being returned
(e.g. getting the top 10 of something), then add a LIMIT clause to the end of your query:

SELECT storyID FROM stories
ORDER BY storyRating DESC
LIMIT 10;

It can sometimes be useful to run a query that counts the number of rows in your result before
pulling all of them. This can give you an idea of how to limit your rows or how to run your next
query (although this largely depends on your particular situation). Here's a way to quickly get the
number of stories from our example:

SELECT COUNT(storyID) AS storyCount FROM stories;

The results will be a row containing a field called "storyCount." This type of technique becomes
more useful as your database grows larger and larger.
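MySQL can also combine the count and the LIMITed fetch into one pass: add SQL_CALC_FOUND_ROWS to the query and then ask for the total with FOUND_ROWS(). (Whether this beats a separate COUNT depends on the query and indexes, so benchmark both on your data.)

```sql
-- First query returns just the page of rows...
SELECT SQL_CALC_FOUND_ROWS storyID FROM stories
ORDER BY storyRating DESC
LIMIT 10;

-- ...then this returns how many rows there would have been without the LIMIT:
SELECT FOUND_ROWS() AS storyCount;
```

The FOUND_ROWS() call must run on the same connection, immediately after the first query.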




The ORDER BY Dilemma

Using ORDER BY is great for sorting, but sometimes it can create real slowdowns on MySQL.
When you ORDER BY a field, MySQL first finds all the rows that will be in your results, and
THEN goes back and re-orders them according to that ORDER BY field. If you have a lot of
rows, then MySQL has to do a lot of re-ordering, which can be very slow.

In the above example on LIMITs, the query would have to sort every single story by its rating
before returning the top 10. However, if I know that all of the top 10 stories have a rating of 4 or
higher, then I could reduce the number of stories to be sorted like this:

SELECT storyID FROM stories
WHERE storyRating >= 4
ORDER BY storyRating DESC
LIMIT 10;

Now MySQL may only have to sort through 100 stories instead of 10,000.

Sometimes it's worth asking yourself whether you REALLY need to use ORDER BY at all.
Sometimes it's faster to skip the ordering altogether on the database and use PHP or
something else to handle the sorting (although MySQL is usually faster at it).

One other trick is to create an index of the fields you're SELECTing and ORDERing BY. So if
you had a query:

SELECT storyID, storyRating FROM stories
ORDER BY storyRating DESC;

Then the query could benefit a lot from a multi-field index on storyRating and storyID (with
storyRating first, so MySQL can read the rows out of the index in already-sorted order).
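As a sketch, assuming the stories table from earlier (the index name here is made up), a covering index for that query would be created like this - storyRating first so the index is already in ORDER BY order, storyID second so the SELECT list can be answered from the index alone, with no table lookups and no filesort:

```sql
-- Covering index: MySQL can walk this index backwards for DESC order
-- and never touch the table rows themselves.
ALTER TABLE stories ADD INDEX idx_rating_id (storyRating, storyID);
```

If EXPLAIN then shows "Using index" in the Extra column, the index is doing its job.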




============================================================

#2. The MySQL Configuration

============================================================

There are a LOT of ways to configure MySQL, but all of them start with the my.cnf
configuration file (that's usually what it's called). Generally speaking, you can tune up MySQL
by telling it to cache things in memory. When it stores any data in memory, MySQL can access it
almost instantly instead of having to go back to the full database on the hard drive and look up
the requested data (which is slow).

Here's an example section of a my.cnf file (I've taken out some extra parameters that weren't
performance-related and some others that I won't discuss in this article):

[mysqld]
skip-name-resolve
query_cache_size = 16M
log-slow-queries=/var/log/slowqueries.log
long_query_time = 4
log-queries-not-using-indexes
table_cache = 512
tmp_table_size = 128M
max_heap_table_size = 128M
myisam_sort_buffer_size = 8M
sort_buffer_size = 8M
join_buffer_size = 256K
key_buffer = 128M


The first thing I always do is disable name resolution (skip-name-resolve). Basically, name
resolution just tries to look up a "caller ID" on whoever is connecting to the database. I still don't
know why it's enabled by default. It's not only a potential security problem, but it's usually
unnecessary for most web server setups (since the web server is the one that does the connecting,
not the visitors), and it has the potential to crash the system (if your DNS goes down for a while
and MySQL gets filled up with connections that are waiting to be "resolved").

Next, enable the query cache (query_cache_size). In the above example, I've got a 16-megabyte
query cache. Basically, if I run a query that takes 5 seconds to run, and then I refresh a page or
something (causing the query to run again), then the query will run instantly because MySQL
will remember the results of the query from the first time. If the tables involved in the query get
changed, though, then it will clear any cached results that use those tables (so you're always
getting accurate data). Start with a 16-megabyte cache and work your way up as necessary (I'll
explain in a bit how to tell when to increase the cache).
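One quick way to tell (a standard MySQL command, though not shown above): ask the server for its query-cache counters. If Qcache_lowmem_prunes keeps climbing, the cache is too small and MySQL is evicting results to make room; if Qcache_hits stays near zero, the cache isn't helping at all.

```sql
-- Counters since server start: hits, inserts, low-memory prunes,
-- free memory, number of queries currently cached, etc.
SHOW STATUS LIKE 'Qcache%';
```

Compare Qcache_hits to Qcache_inserts over time to get a rough hit rate.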


Third, enable the slow query log (log-slow-queries, long_query_time, and
log-queries-not-using-indexes). This tells MySQL to keep track of all the queries that take longer
than a certain number of seconds (long_query_time) to complete. The
log-queries-not-using-indexes option also includes queries that don't use indexes (simple
enough). Just let the log sit for a day or two while you use your application, and then look at it to
find all the queries that need to be optimized.
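On MySQL 5.1 and later you can also flip the slow query log on at runtime without a restart. (The variable names differ slightly from the old my.cnf option names above - the log-slow-queries file option became the slow_query_log variable.)

```sql
SET GLOBAL slow_query_log = 1;                  -- turn the log on
SET GLOBAL long_query_time = 4;                 -- log queries slower than 4 seconds
SET GLOBAL log_queries_not_using_indexes = 1;   -- also log unindexed queries
```

These settings last until the server restarts; put them in my.cnf to make them permanent.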


The last section of lines has several different purposes (caching joins, ORDER BY results,
temporary tables, etc.), which all affect speed, but it's difficult to know exactly what values to use
sometimes. That's why I recommend using MySQLTuner:

http://wiki.mysqltuner.com/MySQLTuner

It's a Perl script that you just download and run on your database server after letting your server
run for a few days (without restarting). The script will look at all the statistics that MySQL
collects and will make recommendations on what to change in your my.cnf file to make things
run better (like increasing query cache size or table_cache and that type of thing). It's pretty
straightforward and doesn't take long to run.




============================================================

#3. Instant Speed! Just Add Hardware!

============================================================

This is usually the most obvious answer. Upgrade to a faster CPU, add more RAM, etc... and
you'll run faster. This is true, but there are a few things to know first.

First, the NUMBER of hard drives is more important than the SPACE. Some people make the
mistake of getting two 1-terabyte drives and just using those to run their database server. By
adding multiple hard disks in a RAID array (which is what most servers use anyway), you're
effectively distributing the load.

If two queries are running at the same time and you only have two hard drives, then there's a
good chance that the data for both queries is located on the same hard drive. Since a hard drive
can only do one thing at a time, one of the queries will have to wait a little bit longer for the
other one to finish before it can run. But if you have, say, 6 hard drives or more (the more the
merrier), then one query might need data from Hard Drive #2 while the second query needs data
from Hard Drive #5. Both hard drives can work at the same time and send data back nearly
simultaneously. At least that's the gist of it, so spend money on multiple, fast hard disks for that
extra bump in speed. Hard disks are usually the biggest speed bottleneck anyway
(hardware-wise).


Last point on hard disks - if someone else is setting up your server and wants to know what
RAID level you want to use, try to use RAID 10 (safe + good performance). Otherwise, use
RAID 1 (safe). Other RAID levels have their advantages and disadvantages, but those are my
standard recommendations.


Second, there's usually not a direct correlation between RAM and speed. Just because you add
more RAM doesn't mean the system automatically uses it (or uses it right). If you've got several
gigabytes of RAM already, then any additional RAM should probably go towards bigger caches.
There are other uses as well (like increasing the number of maximum connections), but if you're
reading this article, then you may not be at that point yet anyway.

Third, CPU is sort of like luck - it affects everything a little bit, and affects some things a LOT.
If you're writing a math-hungry application, crunching statistics, etc... then investing in the latest
and greatest CPU and motherboard may be a good choice. If it's just your standard web/database
server, then there are probably better ways of spending your money (-cough- more hard drives
-cough-).


Finally, if one server just can't handle all the traffic, consider setting up another server with
dual-master replication for a quick-n-dirty way of load-balancing. (Note - replication doesn't
actually do load-balancing, it just keeps two servers in complete, realtime sync so you can send
50% of visitors to one database server and the other 50% to the other server. It also makes for a
handy backup / failover system.)

http://www.neocodesoftware.com/replication/
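As a minimal sketch of the my.cnf settings involved (assuming two servers and auto-increment primary keys; the IDs and log name are illustrative, and a real setup also needs matching replication user accounts and CHANGE MASTER TO commands on each side, which aren't shown here):

```ini
# Server A's my.cnf ([mysqld] section); Server B is identical except
# server-id = 2 and auto_increment_offset = 2.
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # both masters count by twos...
auto_increment_offset    = 1   # ...from different offsets, so their
                               # auto-increment keys never collide
```

The auto_increment pair is what makes dual-master safe for tables with auto-increment keys: server A generates odd IDs, server B generates even ones.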



FINAL TIP: A lot of places will have a test database server and a real database server, but the
test database server only has very few rows in it (just enough for a basic test). This makes it easy
to misjudge your application's REAL performance. Try to make sure that your test database has
similar data to your real database so you get a better picture of how your queries will perform in
the real world. Many MySQL manager programs like phpMyAdmin and HeidiSQL make it easy
to download all the data from your real database so you can upload it into your test database.
(There's also a command line tool called mysqldump.)

That's it! If you've read this far, then you know all the major (and some minor) steps in
improving your MySQL performance, and you're now further ahead than most others who are
still trying to read through those boring, thick manuals. If you run into any trouble with any of
the points made here, please feel free to comment and/or post questions in the Experts Exchange
MySQL area.

Zone: MySQL Server
Tags: mysql, database, performance, optimization, indexes, queries, slow, caching
Category: Best Practices


07/28/09 08:26 AM, ID: 2361 - mwvisa1:
Another excellent article, gr8gonzo!

07/28/09 02:01 PM, ID: 2369 - alainbryden:
Absolutely brilliant, professional quality advice for everyone, whether they're a lowly
programmer, or a data-warehouse manager. Well done!

07/28/09 02:42 PM, ID: 2374 - tigermatt:
Great article, thanks. I do feel some info on the types of RAID array would enhance the section
on hard disks - in particular, it is worth mentioning plenty of mirrors (RAID 1 or RAID 10)
arrays are great -- and RAID 5 can be a performance hit in most configurations.

I don't know how MySQL handles transaction logs, so please correct me if I am wrong or it
doesn't use them: for most database applications, you should store the 2 separate components -
transaction logs and the actual database files - on separate arrays (not partitions, arrays). This
improves performance and also helps with disaster recovery.

-Matt

07/28/09 06:03 PM, ID: 2375 - gr8gonzo:
Hi Matt,

Regarding RAID: RAID 1 actually should perform worse than RAID 5. RAID 1 is mirroring, so
every byte is written twice (which means it's safer). RAID 5 in its most minimal configuration is
slightly faster than RAID 1, but gets faster as you add more drives. RAID configurations are a
whole 'nother article, but if given the choice, I always use RAID 10 - it has the performance of
RAID 0, while having the safety of RAID 1.

MySQL has different storage engines. The MyISAM engine, for example, does not require/use
transaction logs (unless you count the optional binary log). The InnoDB engine, on the other
hand, uses a set of files for transaction logs, and is much more like MS SQL Server (judging by
your wording, I'm guessing that's what you use). Ultimately, separating transaction logs and
database files is the same concept of spreading the load across disks. You can separate them on
different RAID arrays for even better performance, but that was a bit beyond the scope of the
article. Good points, though. :)

07/29/09 03:49 AM, ID: 2381 - tigermatt:
Agree. I always use RAID 10. My experience is indeed in SQL Server and Exchange, so that's
where my transaction log thinking came from.

-Matt

07/29/09 02:16 PM, ID: 2399 - b0lsc0tt:
Great article and I loved the details. The approach and flow was something that made this fun to
read and easy to review. The examples are well thought out and the sections, in my opinion,
cover the common issues. Both are done in a way so even the non-DBA can learn from this and
apply what you teach. Thanks for the time and work on this contribution!

bol

08/03/09 06:51 AM, ID: 2491 - angus_young_acdc:
Great article. Very informative (and now bookmarked by myself) and it has given me a few
handy hints for my own DB work.

08/12/09 01:37 PM, ID: 2683 - intlgd:
Thanks a lot! Very nice.

08/12/09 03:25 PM, ID: 2691 - jason1178:
Excellent. I've already fixed a bunch of slow queries based on this.

08/12/09 03:43 PM, ID: 2692 - DanRollins:
Excellent article! Thanks.

08/18/09 09:48 AM, ID: 2767 - b10w01f:
Thanks for the input. Very nice article. ^_^

08/20/09 12:32 AM, ID: 2833 - Odengr:
The MySQLTuner link is broken (he changed his site to a new one).

New URL for now: http://blog.mysqltuner.com/MySQLTuner

Great Article! :)

08/22/09 05:33 PM, ID: 2886 - driver_x:
awesome article very helpful. I have posted a question at
http://www.experts-exchange.com/Database/MySQL/Q_24673292.html - i think its one that you
would be able to answer very easily and would enjoy your input

01/06/10 03:14 AM, ID: 7729 - marklogan:
You may also want to consider covering USE INDEX, IGNORE INDEX, and FORCE INDEX:

http://dev.mysql.com/doc/refman/5.1/en/index-hints.html

01/06/10 06:40 PM, ID: 7768 - gr8gonzo:
Hi marklogan,

Thanks for reading and for the comment. One reason that I did not cover those is because 99% of
the time, MySQL is smart enough to pick the right indexes. In my opinion, they're really only
useful in very specific scenarios that are not common to most users and their databases. If you
happen to have a huge table with many indexes that partially overlap each other, they could
come in handy, but again, it's a pretty narrow scenario for most people.

Information about advanced techniques like that should probably come from things like the
glorious MySQL performance blog (www.mysqlperformanceblog.com) - they would probably
do a far better job at covering it than an extra paragraph in this article, too. :)

01/28/10 07:58 AM, ID: 8576 - taskmgr:
gr8gonzo, do you offer paid mysql service? We need urgent help. Thanks

01/31/10 12:51 PM, ID: 8866 - ModernMatt:
http:#c8576 edited to remove contact information.

ModernMatt
Experts Exchange Moderator














Speed Up Your Web Site With MySQL Query Caching

February 25th, 2010. Posted in Database.

One of the best ways to speed up your web application is to enable query caching in your
database, which caches commonly used SQL queries in memory for virtually instant access by
the next page that makes the same request.

MySQL is a rock solid, lightning fast database server which has been designed for two factors:
speed and performance. It is the Ferrari of databases: lightweight, fast, and built for the high
speed tracks! Learn how to get the best performance from your MySQL database server.
Optimize MySQL tables for speed

Just like you need to defrag and check your file system, it's important to do the same thing with
MySQL tables. If you don't, you might end up with slow and corrupted database tables. Follow
this easy tutorial to set up an automatic optimization for your db.

Build Your Own Database Driven Website Using PHP & MySQL: Part 4

You'll learn how to take information stored in a database and display it on a Web page for all to
see. So far you have installed and learned the basics of MySQL, a relational database engine, and
PHP, a server-side scripting language. Now you'll see how to use these two new tools together to
create a true database-driven Web site!

Optimize MySQL tables simple

Just like you need to defrag and check your file system, it's important to do the same thing with
MySQL tables. If you don't, you might end up with slow and corrupted database tables. Follow
this easy tutorial to set up an automatic optimization for your db. Speed in MySQL is all about
how fast the…

Getting to Grips with mySQL

This tutorial will show you how to create and query a database using SQL and the mySQL
command terminal.

Modern day web sites seem to be relying more and more on complex database systems. These
systems store all of their critical data, and allow for easy maintenance in some cases. The
Structured Query Language (SQL) is a very popular database language, and its standardization
makes it quite easy to store, update and access data. One of the most powerful SQL servers out
there is called MySQL and surprisingly enough.

MySQL is the most popular open source SQL database management system, and is developed,
distributed, and supported by MySQL AB. MySQL AB is a commercial company, founded by
the MySQL developers.

This tutorial will show you how to use MySQL queries to a database in your PHP scripts.

This tutorial is the second part of the MySQL Optimization mini series and introduces the Slow
Query Log as well as describes its setup and output.

This is just a query tutorial. A query, for those of you who really are just starting, is a batch of
commands, or a group of commands.

A collection of 20 FAQs on MySQL SELECT query statements and basic features. Clear
answers are provided with tutorial exercises on selecting rows and columns from tables and
views; sorting and counting query outputs; grouping outputs and applying group functions.

I find MySQL a very worthy alternative to SQL Server, especially in the departmental
applications area and Small-Medium Business (SMB) market segment where SQL Server is
routinely found. Of course, MySQL excels in the Web, embedded, SaaS, Telco, and Enterprise
2.0 areas as well and has the premier customer rolodex to prove it.

This MySQL tutorial is a guide for new MySQL users, providing a reference of very basic, step
by step instructions on how to get started. This MySQL help page also gives more advanced
users assistance in upgrading and running multiple versions of MySQL on a computer at the
same time.

MySQL Front allows anyone to easily create and manage their MySQL databases. In this tutorial
see how to get up and running with MySQL Front.

Learn how to optimize queries and indexes, and potentially speed up your application many
times over. The database is just too slow. Queries are queuing up, backlogs growing, users being
refused connection. Management is ready to spend millions on "upgrading" to some other
system, when the problem is really that MySQL is simply not being used properly. Badly defined
or non-existent indexes are one of the primary reasons for poor performance, and fixing these
can often lead to phenomenal improvements.

MySQL now offers subqueries/sub selects. They can help you write faster and shorter code. You
don't need to do that much programming in PHP, just let MySQL do the work.

Learn how to execute a MySQL file. MySQL provides the executable file and uses the MySQL
client interactive file via this command: "shell> mysql db_name".

Read a comprehensive tutorial on MySQL. The Structured Query Language is used in
manipulating data stored in Relational Database Management Systems (RDBMS). SQL provides
commands through which data can be extracted, sorted, updated, deleted and inserted. SQL has
the full support of ANSI (American National Standards Institute), which has laid down certain
rules for the language.

MySQL is a freely available open source Relational Database Management System [RDBMS], a
database engine of sorts that uses Structured Query Language (SQL). SQL is the most popular
language for adding, accessing, and processing data in a database, and is most noted for its rapid
processing, proven reliability, and ease and flexibility of use.

In this article, we're going to take a look at MySQL internals. It will be a fun, informative
examination of how all the different pieces and parts of the MySQL server operate together.

In this section you will learn how to install MySQL 5.0 on a Windows system. MySQL 3.21 was
the first version for Windows. The Windows installer of MySQL includes an auto installer with a
Configuration Wizard that supports easy installation.

The MySQL Librarian is a collection of community-generated and cross referenced content
related to MySQL. It's a place where the community, collaboratively, builds and maintains
MySQL content.

How to efficiently select related data from two databases simultaneously using a single MySQL
query.

Sun Microsystems, Inc. today announced MySQL 5.4, a new version of the world's most popular
open source database, designed to deliver significant performance and scalability improvements
to MySQL applications.

This tutorial will teach you how to import a tab delimited file into a MySQL database. We built
a program to do this for Windows ISA Server logs which contained over 500,000 lines, which
was way too many for Microsoft Excel to handle and too much for Notepad to handle properly.

It is a good idea to backup your MySQL data occasionally as a precautionary measure. It is also
a good idea to create a backup before making any major changes, in case something goes wrong
and you need to revert to the unmodified version. Database backups can also be used to transfer
your database from one server to another if you change web hosts.

In this tutorial, we cover the following topics: a short introduction to relational databases and
relational modeling; a quick start guide to the winestore database and its full entity-relationship
model; the MySQL command interpreter and the basic features of MySQL; using SQL to create
and modify databases, tables, and indexes; using SQL to insert, delete, and update data; the
SQL SELECT statement for querying, with examples of simple and advanced queries; functions
and operators in SQL and MySQL; and advanced features, including managing indexes and
keys, tuning the MySQL DBMS, security, and the limitations of MySQL.

MySQL now offers subqueries/sub selects. They can help you write faster and shorter code. You
don't need to do that much programming in PHP, just let MySQL do the work. A new thing in
MySQL 5 is the possibility to use something called "SQL subqueries". It can be a very powerful
new tool.

It’s time to add
some dynamic content to your website. The best choice for ease
-
of
-
use, price and
support is the combination of PHP and MySQL. This article introduces the beginner to using
MySQL with PHP.

Performance Tuning Primer Script is the shell script which gets perf
ormance related information
from MySQL engine and produce some recommendations for tuning of MySQL server variables.
It’s easy to use and offers valuable output for MySQL performance optimization.

The MySQL access control system enables you to create MySQL accounts and define what each account can do. Several types of privileges can be assigned to an account; they should be granted according to how the account is to be used.

It's important to know how to write your queries so that they run fast, and MySQL also has a built-in way to make queries faster: indexes. They help MySQL do the work faster, but you need to tell it how.

This tutorial is the first part of a MySQL optimization mini-series and guides you through the configuration directives that have a direct influence on MySQL performance.

This tutorial will show you the essential steps to build and install the MySQL Connector/C++ driver, with simple examples showing how to connect to, insert into, and retrieve data from a MySQL database.

We are going to take a look at MySQL internals. It will be a fun, informative examination of how all the different pieces and parts of the MySQL server operate together. MySQL's implementation is a fascinating mix of technology that is truly a remarkable achievement: an achievement born from the sweat and inspiration of numerous developers over many years.

In the previous sections, you used mysql interactively to enter queries and view the results. You
can also run mysql in batch mode.
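In batch mode the mysql client reads statements from a file instead of a prompt, for example `mysql db_name < statements.sql`. A minimal sketch of the same idea, using Python's standard sqlite3 module so it runs without a MySQL server (the file contents and table name are made up):

```python
import os
import sqlite3
import tempfile

# A "batch file" of SQL statements, analogous to: mysql mydb < batch.sql
batch_sql = """
CREATE TABLE totals (day TEXT, amount INTEGER);
INSERT INTO totals VALUES ('mon', 10);
INSERT INTO totals VALUES ('tue', 32);
"""
with tempfile.NamedTemporaryFile("w", suffix=".sql", delete=False) as f:
    f.write(batch_sql)
    path = f.name

# Execute the whole file non-interactively, statement by statement.
conn = sqlite3.connect(":memory:")
with open(path) as f:
    conn.executescript(f.read())
os.remove(path)

total = conn.execute("SELECT SUM(amount) FROM totals").fetchone()[0]
print(total)  # 42
```

Batch mode is handy for scheduled jobs and schema migrations, where the same statements must run repeatably with no one at the keyboard.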

In this lesson you will read about the configuration of MySQL. The MySQL server configuration is normally set up during the installation process.

You can create up to 4 MySQL databases using the "Add MySQL Database" function of your Control Panel.

There are other enhancements in MySQL 5.4 as well, such as the introduction of DTrace support for MySQL on the Solaris platform, and a replication heartbeat, which helps ensure that master/slave topologies are up and running. All in all, the 5.4 release focuses quite a lot on transparent performance improvements and extra diagnostic and error-handling capabilities, which are always welcomed by developers and DBAs.

The MySQL distribution provides several tools for database developers and administrators, but they don't always work everywhere. Fortunately, the worldwide MySQL community has produced plenty of useful tools.

Source: http://www.tutcity.com
