MySQL. There is a news table that already holds about 70 thousand records, and it will keep growing.
The structure is:
CREATE TABLE IF NOT EXISTS `news` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `id_section` int(11) NOT NULL,
  `title` varchar(250) NOT NULL,
  `image` varchar(250) DEFAULT NULL,
  `url` varchar(250) NOT NULL,
  `timestamp` int(10) unsigned NOT NULL,
  `active` tinyint(1) unsigned DEFAULT '1',
  PRIMARY KEY (`id`),
  KEY `id_section` (`id_section`),
  KEY `timestamp` (`timestamp`),
  KEY `title` (`title`),
  KEY `active` (`active`),
  KEY `url` (`url`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=69653 ;
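As an aside (this index is an assumption on my part, not something from the schema above): single-column indexes on `active`, `url`, and `timestamp` cannot all be combined in one query, so a composite index that matches the filter and the grouping column might help MySQL avoid a full scan:

```sql
-- Hypothetical composite index: `active` first (the WHERE filter),
-- then `url` (the GROUP BY column). The ORDER BY timestamp will
-- still need a sort of the grouped rows, but grouping itself can
-- walk the index instead of the whole table.
ALTER TABLE `news` ADD INDEX `active_url_ts` (`active`, `url`, `timestamp`);
```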
Here is the problem: records are added automatically, so the same entry can be inserted multiple times under different id_section values.
As a result, when data is fetched without specifying an id_section
(i.e., showing news from all sections), duplicate records show up, and as a rule they are consecutive. That is bad.
While the table was small, this query solved the problem:
SELECT `news`.* FROM `news` WHERE (active = 1) GROUP BY `url` ORDER BY `timestamp` DESC LIMIT 10 OFFSET 20
However, this query now takes 4-5 seconds (!!!). We need a solution that brings it to at most 0.5 seconds. Note: without the GROUP BY, the same query runs in 0.7 seconds, even though other queries against small tables take microseconds.
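A first diagnostic step (standard MySQL tooling, not specific to this table): run EXPLAIN on the slow query to see which index, if any, is actually used and whether a temporary table and filesort are involved:

```sql
-- Look at the `key`, `rows` and `Extra` columns of the output;
-- "Using temporary; Using filesort" in Extra usually explains
-- multi-second GROUP BY times on tables of this size.
EXPLAIN SELECT `news`.*
FROM `news`
WHERE active = 1
GROUP BY `url`
ORDER BY `timestamp` DESC
LIMIT 10 OFFSET 20;
```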
Any optimization proposals are welcome, not only for this query. Perhaps there are special techniques for solving this kind of problem.
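One common technique for "one row per group" problems (a sketch under the assumption that keeping the row with the largest `id` per `url` is acceptable; not benchmarked against this table): collapse the duplicates in a derived table that only needs `id`, then join back for the full rows. The heavy GROUP BY then operates on index columns rather than full rows:

```sql
-- Inner query: pick a single representative id per url.
-- Outer query: fetch full rows only for those ids, then sort and page.
SELECT n.*
FROM `news` AS n
JOIN (
    SELECT MAX(id) AS id
    FROM `news`
    WHERE active = 1
    GROUP BY `url`
) AS dedup ON dedup.id = n.id
ORDER BY n.`timestamp` DESC
LIMIT 10 OFFSET 20;
```

An alternative worth considering, since duplicates are written automatically: deduplicate at insert time instead of at read time, e.g. with a separate table keyed by a UNIQUE index on `url` that stores only the latest copy of each story, so the news feed query never has to group at all.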