Search Results

Search found 1650 results on 66 pages for 'indexes'.

Page 55 of 66

  • Cookie not working after mod rewrite rule

    - by moonwalker
    Hi all, I have a simple Cookie to set the chosen language:

        $lang = $_GET['lang'];
        $myLang = $_COOKIE["myLang"];
        if (!isset($_COOKIE["myLang"])){
            setcookie("myLang", "en", $expire);
            include "languages/en.php";
            $myLang = "en";
        }else{
            include "languages/$myLang.php";
        }
        // One year to expire
        $expire = time()+60*60*24*30*365;
        // Put $languages in a common header file.
        $languages = array('en' => 1, 'fr' => 2, 'nl' => 3);
        if (array_key_exists($lang, $languages)) {
            include "languages/{$lang}.php";
            setcookie("myLang", $lang, $expire);
            $myLang = $lang;
        }

    After using some rewrite rules, it just doesn't work anymore. I tried the following:

        setcookie("myLang", "en", $expire, "/", false);

    No luck at all. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteBase /
        RewriteRule ^sort/([^/]*)/([^/]*)$ /3arsi2/sort.php?mode=$1&cat=$2 [L]
        RewriteRule ^category/([^/]*)$ /3arsi2/category.php?cat=$1 [L]
        RewriteRule ^category/([^/]*)/([^/]*)$ /3arsi2/category.php?cat=$1&lang=$2 [L]
        RewriteRule ^search/([^/]*)$ /3arsi2/search.php?mode=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^u/([^/]+)/?$ 3arsi2/user.php?user=$1 [NC,L]
        RewriteRule ^u/([^/]+)/(images|videos|music)/?$ 3arsi2/user.php?user=$1&page=$2 [NC,L]
        RewriteRule ^([^\.]+)$ 3arsi2/$1.php [NC,L]
        </IfModule>

    Any idea how to solve this? I'm still new to the mod rewrite thing, so I still don't really understand the logic behind it all. Thanks for any help you can provide.

    Read the article

  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,...

    In the Data column there's this xml:

        <data>
          <RRN>...</RRN>
          <DateOfBirth>...</DateOfBirth>
          <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times after each other) 910, 739, 630, 635, ... ms. Now, a db specialist told me that adding a function and a view, and changing the query, would make it much faster to search for a user with a given RRN. But instead, these are the results when I execute with the changes from the db specialist: 2584, 2342, 2322, 2383, ...

    This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100) WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view slower?
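
    One thing worth checking (a sketch, not verified against this schema): the rewritten query still filters dbo.Users through the scalar UDF, so the indexed view's index may never be touched unless the optimizer matches it automatically. Two commonly suggested variants are querying the view directly with NOEXPAND, or promoting the XML value into a persisted computed column and indexing that; the object names below simply reuse the ones from the question, and the new index name is invented.

        -- Variant 1: hit the indexed view explicitly
        SELECT UserId
        FROM vwi_Users WITH (NOEXPAND)
        WHERE RRN = @RRN;

        -- Variant 2: persist the promoted value on the base table and index it
        ALTER TABLE dbo.Users ADD RRN AS dbo.fn_Users_RRN(Data) PERSISTED;
        CREATE NONCLUSTERED INDEX IX_Users_RRN ON dbo.Users (RRN);

        SELECT UserId FROM dbo.Users WHERE RRN = @RRN;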

    Read the article

  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: 1 using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000
            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                //various passes here
            }
            technique high_res
            {
                lod_index 0
                //various passes here
            }
            technique medium_res
            {
                lod_index 1
                //various passes here
            }
            technique low_res
            {
                lod_index 2
                //various passes here
            }
        }

    Extra information: the Ogre manual says "Increasing indexes denote lower levels of detail" and "You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. OGRE determines which one is 'best' by which one is listed first."

    Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

        Camera > 2000         : Shader technique
        1600 < Camera <= 2000 : Medium
        Camera <= 1600        : High

    If I change the lod order in the shader technique to { lod_index 2 lod_index 1 lod_index 0 } the behaviour becomes:

        Camera > 2000         : Low
        1600 < Camera <= 2000 : Medium
        Camera <= 1600        : Shader

    implying only the last lod_index is used. If I change it to "lod_index 0 1 2" it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            //various passes here
        }
        technique shaders1
        {
            lod_index 1
            //passes repeated here
        }
        technique shaders2
        {
            lod_index 2
            //passes repeated here
        }

    ...but it's ugly.

    Read the article

  • apache web server configuration problem

    - by mohit
    I want the Apache server to serve only the /var/www/ directory; right now it serves all the files on my system, starting from "/". I tried to edit httpd.conf in /etc/apache2 and placed the following content in it (initially it was empty):

        <Directory />
            Options None
            AllowOverride None
        </Directory>
        DocumentRoot "/var/www"
        <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Then I saved it, restarted the Apache server and put the location /var/www in the web browser address bar; it still shows the higher-level directories too. I then edited the files Default and Default-ssl in the sites-available folder and repeated the same process, but Apache still serves all the files on my system.

    2. When I try to use the command "gedit httpd.conf" I get the error:

        (gedit:2696): EggSMClient-WARNING **: Failed to connect to the session manager: None of the authentication protocols specified are supported
        GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)

    Read the article

  • Searching with MATCH(), AGAINST() and AS score with mysqli and php

    - by Drew
    Below is the code I am using to search my table. I have made the relevant columns FULLTEXT in the table. This doesn't return anything. Can someone tell me what it is that I'm doing wrong? Thanks in advance.

        $sql = 'SELECT id,uname,class,school, MATCH(uname, class, school) AGAINST(?) AS score
                FROM images
                WHERE MATCH(uname, class, school) AGAINST(? IN BOOLEAN MODE)
                ORDER BY score DES';
        $stmt = $db_connection->prepare($sql);
        $stmt->bind_param('ss',$keyword,$keyword);
        $stmt->execute();
        $stmt->store_result();
        $stmt->bind_result($id,$uname,$class,$school);
        $xml = "<data>".PHP_EOL;
        while($stmt->fetch()){
            $xml .= " <person>".PHP_EOL;
            $xml .= " <id>$id</id>".PHP_EOL;
            $xml .= " <name>$uname</name>".PHP_EOL;
            $xml .= " <class>$class</class>".PHP_EOL;
            $xml .= " <school>$school</school>".PHP_EOL;
            $xml .= " </person>".PHP_EOL;
        }
        $xml .= "</data>";
        echo $xml;

    Below is an image of the indexes of the table:
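
    For reference, a minimal sketch of the FULLTEXT setup this kind of query relies on (table and column names are taken from the question, everything else is assumed). Note that the sort direction has to be spelled out in full as DESC, that boolean-mode searches usually want a wildcard or operator in the search term, and that before MySQL 5.6 FULLTEXT indexes require a MyISAM table.

        -- assumes the images table exists with these columns; the index name is illustrative
        ALTER TABLE images ADD FULLTEXT INDEX ft_images_search (uname, class, school);

        SELECT id, uname, class, school,
               MATCH(uname, class, school) AGAINST('smith') AS score
        FROM images
        WHERE MATCH(uname, class, school) AGAINST('smith*' IN BOOLEAN MODE)
        ORDER BY score DESC;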

    Read the article

  • Full-text search in C++

    - by Jen
    I have a database of many (though relatively short) HTML documents. I want users to be able to search this database by entering one or more search words in a C++ desktop application. Hence, I’m looking for a fast full-text search solution. Ideally, it should: Skip common words, such as the, of, and, etc. Support stemming, i.e. search for run also finds documents containing runner, running and ran. Be able to update its index in the background as new documents are added to the database. Be able to provide search word suggestions (like Google Suggest) To illustrate, assume the database has just two documents: Document 1: This is a test of text search. Document 2: Testing is fun. The following words should be in the index: fun, search, test, testing, text. If the user types t in the search box, I want the application to be able to suggest test, testing and text (Ideally, the application should be able to query the search engine for the 10 most common search words starting with t). A search for testing should return both documents. Can you suggest a C or C++ based solution? (I’ve briefly reviewed CLucene and Xapian, but I’m not sure if either will address my needs, especially querying the search word indexes for the suggest feature).

    Read the article

  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I gotta think of the future performance as well here):

        SELECT count(payment_id) as signup_count, sum(amount) as signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id
                                FROM payments p2
                                WHERE p2.completed=1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine - it chokes the mysql server to a standstill... What it does is - it simply pulls the number of new users who signed up, have at least one "completed" payment, tm_completed is not empty (as it is only populated for completed payments), and (the embedded Select) that member has never had a "completed" payment before - meaning he's a new member (just because the system does rebills and whatnot, and this is the only way to sort of differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use less resources or something, and to stop taking my mysql resources down on their knees...? Am I missing any info to clarify this any further? Let me know...

    EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY   46757   payment_id
        member_id     INDEX     23378   member_id
        payer_id      INDEX     11689   payer_id
        coupon_id     INDEX         1   coupon_id
        tm_added      INDEX     46757   tm_added, product_id
        tm_completed  INDEX     46757   tm_completed, product_id
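
    One rewrite that is often suggested for this NOT IN pattern on MySQL 5.x — a sketch only, using the columns from the question and assuming member_id is never NULL in payments — is to turn the subquery into an anti-join, so the "no prior completed payment" test can use the member_id index instead of re-running the inner select:

        SELECT COUNT(p.payment_id) AS signup_count, SUM(p.amount) AS signup_amount
        FROM payments p
        LEFT JOIN payments prior
               ON prior.member_id = p.member_id
              AND prior.completed = 1
              AND prior.tm_completed IS NOT NULL
              AND prior.tm_completed < '2009-05-01'
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND p.tm_completed IS NOT NULL
          AND prior.member_id IS NULL;   -- keep only members with no earlier completed payment

    A composite index such as (member_id, completed, tm_completed) is the other thing usually suggested alongside this, again as a guess rather than something verified against this data.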

    Read the article

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

        1. Pre-deployment script
        2. Tables
        3. Indexes, keys, etc.
        4. Procedures
        5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that acts on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context. C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?!

    Now, I have a workaround:

        1. Comment out the sproc body
        2. Deploy (CDC is created)
        3. Uncomment the sproc
        4. Deploy

    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure again.

    Thanks for reading my question and thanks for your help!
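
    For context, a minimal sketch of what the post-deployment CDC step typically looks like (the table name is borrowed from the question; the existence guards are an assumption, not taken from the project):

        -- enable CDC on the database once, then on the tracked table if it isn't already
        IF NOT EXISTS (SELECT 1 FROM sys.databases WHERE name = DB_NAME() AND is_cdc_enabled = 1)
            EXEC sys.sp_cdc_enable_db;

        IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = N'Foo' AND is_tracked_by_cdc = 1)
            EXEC sys.sp_cdc_enable_table
                 @source_schema = N'dbo',
                 @source_name   = N'Foo',
                 @role_name     = NULL;

    This doesn't by itself fix the ordering problem — the procedures are still created before this runs — but guarding the calls at least keeps repeat deployments idempotent.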

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (worth starting off with a disclaimer: I'm very new to PostgreSQL.) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South), the tests all pass. However, in PostgreSQL I'm getting the following error:

        IntegrityError: duplicate key value violates unique constraint "business_contact_pkey"

    Note this happens while unit testing only - the actual page runs fine in both MySQL & PostgreSQL. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the PostgreSQL "\d business_contact" output and the offending tests.py method, if they help. No changes were made to either DB except the (same) South migrations. Thanks.

        first_name   | character varying(200)   | not null
        mobile_phone | character varying(100)   |
        surname      | character varying(200)   | not null
        business_id  | integer                  | not null
        created      | timestamp with time zone | not null
        deleted      | boolean                  | not null default false
        updated      | timestamp with time zone | not null
        slug         | character varying(150)   | not null
        phone        | character varying(100)   |
        email        | character varying(75)    |
        id           | integer                  | not null default nextval('business_contact_id_seq'::regclass)
        Indexes:
            "business_contact_pkey" PRIMARY KEY, btree (id)
            "business_contact_slug_key" UNIQUE, btree (slug)
            "business_contact_business_id" btree (business_id)
        Foreign-key constraints:
            "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED
        Referenced by:
            TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED

    TEST DEF:

        def test_add_business_contact(self):
            """ Add a business contact """
            contact_slug = 'test-new-contact-added-new-adf'
            business_id = 1
            business = Business.objects.get(id=business_id)
            postdata = {
                'first_name': 'Test',
                'surname': 'User',
                'business': '1',
                'slug': contact_slug,
                'email': '[email protected]',
                'phone': '12345678',
                'mobile_phone': '9823452',
                'business': 1,
                'business_id': 1,
            }
            #Test to ensure contacts that should not exist are not returned
            contact_not_exists = Contact.objects.filter(slug=contact_slug)
            self.assertFalse(contact_not_exists)
            #Add the contact and ensure it is present in the DB afterwards """
            contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug)
            self.client.post(contact_add_url, postdata)
            added_contact = Contact.objects.filter(slug=contact_slug)
            print added_contact
            try:
                self.assertTrue(added_contact)
            except:
                formset = ContactForm(postdata)
                print formset.errors
                self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")
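
    One frequently cited cause of this exact error is a primary-key sequence lagging behind rows that were inserted with explicit ids (fixtures and data migrations can do this), so the next nextval() collides with an existing key. A quick check and fix, as a sketch only (sequence and table names taken from the \d output above; whether this is the actual cause here is an assumption):

        -- see whether the sequence is behind the table
        SELECT last_value FROM business_contact_id_seq;
        SELECT MAX(id) FROM business_contact;

        -- if it is, bump it past the highest existing id
        SELECT setval('business_contact_id_seq', (SELECT MAX(id) FROM business_contact));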

    Read the article

  • php automatically commented with apache

    - by clement
    We have installed Apache 2.2 and ActivePerl to run Bugzilla, all on Windows Server 2003. We now want to install PHP on the server in order to install a wiki. I followed the steps in a tutorial to install PHP and enable it from Apache. After all those steps, I restarted a couple of times, and when I try a simple phpinfo(), the whole PHP block comes back commented out: <!-- ?php phpinfo(); ? -->. The httpd.conf was already edited for Perl, and it may be those edits that are causing the problem. Here is the whole httpd.conf file:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 6969
        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule php5_module "c:/php/php5apache2_2.dll"
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so
        User daemon
        Group daemon
        ServerAdmin [email protected]
        DocumentRoot C:/bugzilla-4.4.2/
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Options Indexes FollowSymLinks ExecCGI
        AllowOverride All
        Order allow,deny
        Allow from all
        ScriptInterpreterSource Registry-Strict
        DirectoryIndex index.html index.html.var index.cgi index.php
        Order allow,deny
        Deny from all
        Satisfy All
        ErrorLog "logs/error.log"
        LogLevel warn
        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        AllowOverride None
        Options None
        Order allow,deny
        Allow from all
        DefaultType text/plain
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddHandler cgi-script .cgi
        AddType application/x-httpd-php .php
        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin
        PHPIniDir "c:/php"

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see the what that achieves though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
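
    As a point of reference for the "new column just for the GUID" idea, here is a minimal sketch in MySQL syntax. The table name, index name, and the choice of BINARY(16) storage are all assumptions for illustration, not taken from the question. The auto-increment PK stays sequential and clustered, while the application generates the GUID up front, so batched inserts no longer depend on reading back the generated keys one by one.

        ALTER TABLE payments_like_table     -- hypothetical name standing in for the real table
            ADD COLUMN row_guid BINARY(16) NOT NULL,
            ADD UNIQUE KEY uq_row_guid (row_guid);

        -- the application generated these UUIDs itself, so it already knows which row is which
        INSERT INTO payments_like_table (row_guid, other_col)
        VALUES (UNHEX('110e8400e29b41d4a716446655440001'), 'value1'),
               (UNHEX('110e8400e29b41d4a716446655440002'), 'value2');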

    Read the article

  • Guid Primary /Foreign Key dilemma SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guid. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality. It has about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from 'int' to 'Guid', would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and that 16 bytes is an issue. Space wouldn't be an issue in my case, and apparently index fragmentation can also be taken care of using the 'NEWSEQUENTIALID()' function. Can someone tell me, from their experience, whether Guid keys will be problematic in tables with many foreign keys? I'd much appreciate your thoughts on it...
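
    For reference, a minimal sketch of the NEWSEQUENTIALID() pattern mentioned above (the table, column and constraint names are invented for illustration): the GUID key is generated server-side in roughly increasing order, which is what keeps the clustered index from fragmenting the way NEWID() would, while the foreign-key columns still want their own nonclustered indexes for lookups from the child side.

        CREATE TABLE dbo.Orders
        (
            OrderId    UNIQUEIDENTIFIER NOT NULL
                       CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID(),
            CustomerId UNIQUEIDENTIFIER NOT NULL,
            CreatedOn  DATETIME         NOT NULL DEFAULT GETDATE(),
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId),
            CONSTRAINT FK_Orders_Customers FOREIGN KEY (CustomerId)
                REFERENCES dbo.Customers (CustomerId)
        );

        -- foreign-key lookups from the child side still benefit from an explicit index
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);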

    Read the article

  • What noncluster index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    The execution plan shows me: I/O cost = 3.45646, operator cost = 4.57715.

    For the first attempt to improve performance, I've created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I've executed the select statement and the execution plan shows me: I/O cost = 2.79942, operator cost = 3.92001.

    Now for the second try, I've deleted this nonclustered index in order to create a new one:

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ( [CategoryName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I've executed the select statement and the execution plan shows me the same result: I/O cost = 2.79942, operator cost = 3.92001.

    Am I doing something wrong, or is this expected? Should I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryID) that includes the second field (CategoryName)?
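
    One way to read those identical costs, as a sketch only: assuming the second index was meant to target Customers (CustomerId) INCLUDE (CustomerName) — the script as pasted names a Categories table — both variants carry CustomerName in their leaf pages, so either one covers this SELECT with no WHERE clause and gets costed the same scan. For this exact statement, the narrowest covering index would simply be (index name is illustrative):

        CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
            ON dbo.Customers (CustomerName);

        -- the plan should then show a scan of this index rather than the clustered index
        SELECT CustomerName FROM dbo.Customers;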

    Read the article

  • how to Solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted this blog, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that was not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL is:

        CREATE TABLE Diggs (
            id      INT(11),
            itemid  INT(11),
            userid  INT(11),
            digdate DATETIME,
            PRIMARY KEY (id),
            KEY user (userid),
            KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
            id           INT(10) AUTO_INCREMENT,
            userid       INT(10),
            username     VARCHAR(15),
            friendid     INT(10),
            friendname   VARCHAR(15),
            mutual       TINYINT(1),
            date_created DATETIME,
            PRIMARY KEY (id),
            UNIQUE KEY Friend_unique (userid,friendid),
            KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, and those friends in turn digg a lot of things. Quickly showing a user what his/her friends are up to is critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious how this could be solved in MongoDB.

    Read the article

  • Calculating and saving space in Postgresql

    - by punkish
    I have a table in Pg like so:

        CREATE TABLE t (
            a BIGSERIAL NOT NULL, -- 8 b
            b SMALLINT,           -- 2 b
            c SMALLINT,           -- 2 b
            d REAL,               -- 4 b
            e REAL,               -- 4 b
            f REAL,               -- 4 b
            g INTEGER,            -- 4 b
            h REAL,               -- 4 b
            i REAL,               -- 4 b
            j SMALLINT,           -- 2 b
            k INTEGER,            -- 4 b
            l INTEGER,            -- 4 b
            m REAL,               -- 4 b
            CONSTRAINT a_pkey PRIMARY KEY (a)
        )

    The above adds up to 50 bytes per row. My experience is that I need another 40% to 50% for system overhead, without even any user-created indexes on the above. So, about 75 bytes per row. I will have many, many rows in the table, potentially upward of 145 billion rows, so the table is going to be pushing 13-14 terabytes. What tricks, if any, could I use to compact this table? My possible ideas are below:

    - Convert the REAL values to INTEGERs. If they can be stored as SMALLINT, that is a saving of 2 bytes per field.
    - Convert the columns b .. m into an array. I don't need to search on those columns, but I do need to be able to return one column's value at a time. So, if I need column g, I could do something like SELECT a, arr[5] FROM t;

    Would I save space with the array option? Would there be a speed penalty? Any other ideas?
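
    To make the array idea concrete, a minimal sketch (the second table name and the SMALLINT packing are assumptions about what the data would tolerate, not something stated in the question): the per-row tuple overhead is paid once for the whole array instead of once per column, at the cost of an array header and of losing per-column types and NULLability.

        -- pack b..m into one smallint[]; values that were REAL would need a fixed scale factor
        CREATE TABLE t_packed (
            a    BIGSERIAL PRIMARY KEY,
            vals SMALLINT[] NOT NULL
        );

        -- element positions replace the former column names (here assuming g lands at position 5)
        SELECT a, vals[5] FROM t_packed;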

    Read the article

  • django: results in in_bulk style without IDs

    - by valya
    In django 1.1.1, Place.objects.in_bulk() does not work, while Place.objects.in_bulk(range(1, 100)) works and returns a dictionary mapping ints (the primary keys) to Places. How can I avoid using range in this situation (and avoid a separate query just for the ids)? I just want to get all objects in this dictionary format.

        >>> Place.objects.in_bulk()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/lib/python2.5/site-packages/Django-1.1.1-py2.5.egg/django/db/models/manager.py", line 144, in in_bulk
            return self.get_query_set().in_bulk(*args, **kwargs)
        TypeError: in_bulk() takes exactly 2 arguments (1 given)

        >>> Place.objects.in_bulk(range(1, 100))
        {1L: <Place: "...">, 3L: <Place: "...">, 4L: <Place: "...">, 5L: <Place: "...">, 8L: <Place: "...">, 9L: <Place: "...">, 10L: <Place: "...">, 11L: <Place: "...">, 14L: <Place: "...">}

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~100 bytes) from an Oracle 10G database table into SQL Server (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. Below are the options I have tried, with a quick summary of each. Has anyone tried this before? Are there better options? Which option would be best in terms of performance and reliability? The times given were calculated by testing on smaller amounts of data and extrapolating to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing the data across at all. Performance is a big problem here, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
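
    For what the flat-file route (option 2, see the list above) typically looks like on the SQL Server side, here is a minimal sketch — the staging table, file path and option values are all assumptions, not taken from the question. Loading in batches with a table lock is usually what makes this option competitive, since it keeps transactions small and allows a minimally logged load into a heap.

        BULK INSERT dbo.OracleExtract_Staging
        FROM 'D:\transfer\oracle_extract.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            BATCHSIZE       = 100000,   -- commit in chunks rather than one 100M-row transaction
            TABLOCK                     -- coarse lock, enables minimally logged inserts on a heap
        );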

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows, and MySQL uses "filesort", so the query gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
            `id` int(11) NOT NULL auto_increment,
            `created_at` datetime default NULL,
            PRIMARY KEY (`id`),
            KEY `index_posts_on_created_at` (`created_at`),
            KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
            `id` int(11) NOT NULL auto_increment,
            `post_id` int(11) default NULL,
            `tag_id` int(11) default NULL,
            `created_at` datetime default NULL,
            `updated_at` datetime default NULL,
            PRIMARY KEY (`id`),
            KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN output:

        id | select_type | table      | type   | possible_keys            | key                      | key_len | ref                 | rows  | Extra
        1  | SIMPLE      | posts_tags | index  | index_post_id_and_tag_id | index_post_id_and_tag_id | 10      | NULL                | 24159 | Using where; Using index; Using temporary; Using filesort
        1  | SIMPLE      | posts      | eq_ref | PRIMARY                  | PRIMARY                  | 4       | .posts_tags.post_id | 1     |
        2 rows in set (0.00 sec)

    What kind of index do I need to define to avoid MySQL using filesort? Is it possible when the ORDER BY field is not in the WHERE clause?
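
    One direction that is commonly suggested for this shape of query — a sketch only, and it relies on the assumption that the created_at already present on posts_tags can stand in for the post's own created_at (or is kept in sync by the application) — is to give the join table an index whose leading column is the filter and whose trailing column matches the sort, so the rows come out of the index already in the right order:

        -- filter on tag_id, then walk rows in created_at order without a filesort
        ALTER TABLE posts_tags ADD INDEX idx_tag_created (tag_id, created_at);

        SELECT posts.* FROM posts
        INNER JOIN posts_tags ON posts.id = posts_tags.post_id
        WHERE posts_tags.tag_id = 1
        ORDER BY posts_tags.created_at DESC;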

    Read the article

  • Apache console accesses network drives, service does not?

    - by danspants
    I have an apache 2.2 server running Django. We have a network drive T: which we need constant access to within our Django app. When running Apache as a service, we cannot access this drive, as far as any django code is concerned the drive does not exist. If I add... <Directory "t:/"> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> to the httpd.conf file the service no longer runs, but I can start apache as a console and it works fine, Django can find the network drive and all is well. Why is there a difference between the console and the service? Should there be a difference? I have the service using my own log on so in theory it should have the same access as I do. I'm keen to keep it running as a service as it's far less obtrusive when I'm working on the server (unless there's a way to hide the console?). Any help would be most appreciated.

    Read the article

  • wordpress 500 - Internal server error

    - by asad
    Hello folks, I installed WordPress 2.9.2 a few days ago and it works correctly. Today I want to use the permalink feature of WordPress. I know I must modify the .htaccess file in my site root, but in my sub-domain root there is no .htaccess file at all, so I created one with the following content in the sub-domain root (next to the index.php file):

        <files .htaccess>
        order allow,deny
        deny from all
        </files>
        ServerSignature Off
        <files wp-config.php>
        order allow,deny
        deny from all
        </files>
        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
        Options All -Indexes
        AddType x-mapp-php5 .php
        AddHandler x-mapp-php5 .php

    But after saving it, my blog disappeared and I get the following error:

        500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.

    After this I removed the .htaccess file, but that did not fix it. What can I do about it? Cheers

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then everything goes back to normal until the next lag. The interval varies, but it looks like it happens periodically. I am using InnoDB. It is like MySQL has hiccups. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided across 3 databases. The most heavily written one is InnoDB; the others are mostly read. Several of the InnoDB tables have more than 2 million records. The other databases top out at about 400 thousand records and do not change often. The PC is a Core 2 Duo 8400 with 4GB RAM, running 32-bit Ubuntu.
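
    A diagnostic sketch that is often useful for this kind of periodic stall (nothing here is specific to this server; the statements just read InnoDB's own counters): capturing these immediately before and during a lag spike helps distinguish dirty-page flushing or redo-log checkpointing from a query-level cause such as query-cache pruning.

        SHOW ENGINE INNODB STATUS\G

        SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
        SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
        SHOW GLOBAL STATUS LIKE 'Qcache_free_blocks';

        -- and, while a stall is in progress, see what every connection is doing
        SHOW FULL PROCESSLIST;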

    Read the article

  • hibernate executeUpdate IndexOutOfBounds

    - by luke
    I am trying to use HQL to perform a simple update in Hibernate, but I can't seem to get it to work. I have a query template defined as:

        private static final String CHANGE_DEVICE_STATUS =
            "UPDATE THING"
            +"SET ACTIVE = ? "
            +"WHERE ID = ?";

    and then I try to execute it like this:

        Session s = HibernateSessionFactory.getSession();
        Query query = s.createQuery(CHANGE_DEVICE_STATUS);
        query.setBoolean(0, is_active);
        query.setLong(1, id);
        query.executeUpdate();

    But now I get this error:

        java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
            at java.util.ArrayList.RangeCheck(ArrayList.java:547)
            at java.util.ArrayList.get(ArrayList.java:322)
            at org.hibernate.hql.ast.HqlSqlWalker.postProcessUpdate(HqlSqlWalker.java:390)
            at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:164)
            at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:189)
            at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:130)
            at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:83)
            at org.hibernate.impl.SessionFactoryImpl.getQuery(SessionFactoryImpl.java:427)
            at org.hibernate.impl.SessionImpl.getQueries(SessionImpl.java:884)
            at org.hibernate.impl.SessionImpl.executeUpdate(SessionImpl.java:865)
            at org.hibernate.impl.QueryImpl.executeUpdate(QueryImpl.java:89)
        ....

    What am I doing wrong here? I am using Hibernate 3.0.

    UPDATE: I changed it to

        Query query = s.createQuery(CHANGE_DEVICE_STATUS);
        query.setBoolean(1, is_active);
        query.setLong(2, id);//<---throws here
        query.executeUpdate();

    without changing anything else but the parameter indexes, and I got this:

        java.lang.IllegalArgumentException: Positional parameter does not exist: 2 in query: UPDATE DEVICE_INSTANCES SET ACTIVE = ? WHERE DEVICE_INSTANCE_ID = ?
            at org.hibernate.impl.AbstractQueryImpl.setParameter(AbstractQueryImpl.java:194)
            at org.hibernate.impl.AbstractQueryImpl.setLong(AbstractQueryImpl.java:244)
        ...

    Read the article

  • Entity Framework + MySQL - Why is the performance so terrible?

    - by Cyril Gupta
    When I decided to use an OR/M (Entity Framework for MySQL this time) for my new project, I was hoping it would save me time, but I seem to have failed at it (for the second time now). Take this simple SQL query:

        SELECT * FROM POST ORDER BY addedOn DESC LIMIT 0, 50

    It executes and gives me results in less than a second, as it should (the table has about 60,000 rows). Here's the equivalent LINQ to Entities query that I wrote for this:

        var q = (from p in db.post orderby p.addedOn descending select p).Take(50);
        var q1 = q.ToList(); //This is where the query is fetched and timed out

    But this query never even executes; it ALWAYS times out (without the orderby it takes 5 seconds to run)! My timeout is set to 12 seconds, so you can imagine it is taking much longer than that. Why is this happening? Is there a way I can see the actual SQL query that Entity Framework is sending to the db? Should I give up on EF+MySQL and move to plain SQL before I lose all eternity trying to make it work? I've recalibrated my indexes and tried eager loading (which actually makes it fail even without the orderby clause). Please help, I am about to give up on OR/M for MySQL as a lost cause.
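
    On the question of seeing what EF actually sends: one server-side way that needs nothing on the .NET side is MySQL's general query log, sketched below (assumes MySQL 5.1+ and a user with the SUPER privilege; it logs every statement, so it is only meant for a short debugging session).

        -- route the general log to a table and switch it on
        SET GLOBAL log_output = 'TABLE';
        SET GLOBAL general_log = 'ON';

        -- run the LINQ query from the application, then inspect what arrived
        SELECT event_time, argument
        FROM mysql.general_log
        ORDER BY event_time DESC
        LIMIT 20;

        SET GLOBAL general_log = 'OFF';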

    Read the article

  • Zend Framework - Deny access to folders other than public folder

    - by Vincent
    All, I have the following Zend application structure:

        helloworld
        - application
          - configs
          - controllers
          - models
          - layouts
        - include
        - library
        - public
          - .htaccess
          - index.php
        - design
        - .htaccess

    The .htaccess in the root folder has the following contents:

        #####################################################
        # CONFIGURE media caching
        #
        Header unset ETag
        FileETag None
        Header unset Last-Modified
        Header set Expires "Fri, 21 Dec 2012 00:00:00 GMT"
        Header set Cache-Control "max-age=7200, must-revalidate"
        SetOutputFilter DEFLATE
        #
        #####################################################
        ErrorDocument 404 /custom404.php
        RedirectMatch permanent ^/$ /public/

    The .htaccess in the public folder has the following:

        Options -MultiViews
        ErrorDocument 404 /custom404.php
        RewriteEngine on
        # The leading %{DOCUMENT_ROOT} is necessary when used in VirtualHost context
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -s [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -l [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    My vhost configuration is as follows:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:\\xampp\\htdocs\\xampp\\helloworld\\"
            ServerName helloworld
            ServerAlias helloworld
            <Directory "C:\\xampp\\htdocs\\xampp\\helloworld\\">
                Options Indexes FollowSymLinks
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
        </VirtualHost>

    Currently, if the user visits http://localhost, my .htaccess files above make sure the request is routed to http://localhost/public automatically. If the user visits any other folder apart from the public folder from the address bar, he gets a directory listing of that folder. How can I deny the user access to every folder other than the public folder? I want the user to be redirected to the public folder if he visits any other folder. However, if the underlying code requests something from other folders (ex: ), it should still work. Thanks

    Read the article

  • Table index design

    - by Swoosh
    I would like to add index(es) to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK, and I would like to know what to look for when I am doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: a massive bulk-insert is adding data to this table.

    The table has 27 columns (so far, and counting :D):

        int x 9 columns = id-s
        varchar x 10 columns
        bit x 2 columns
        datetime x 5 columns

    INT COLUMNS: all of these are INT ID-s, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (a self-join ID). All the "small" tables have a PK, the usual way ... clustered.

    STRING COLUMNS: there is a (Company) column (string!) that is always about 5 characters long, and every user will be restricted using this one. If there are 15 different "Companies" in Task, the logged-in user would only see one. So there's always a filter on this one. Might it be a good idea to add an index to this column?

    DATE COLUMNS: I think you don't index these ... right? Or can/should they be indexed?
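
    To make the discussion concrete, a sketch of the kinds of indexes that usually get considered for this shape of table — every column and index name below is invented for illustration, and whether each index earns its keep against the massive bulk-inserts is exactly the trade-off the question is about:

        -- the always-on Company filter, with a likely secondary filter key and a sort column included
        CREATE NONCLUSTERED INDEX IX_Task_Company_Status
            ON dbo.Task (Company, StatusId)
            INCLUDE (CreatedDate);

        -- date columns can be indexed like any other; useful when queries filter on a date range
        CREATE NONCLUSTERED INDEX IX_Task_CreatedDate
            ON dbo.Task (CreatedDate);

        -- the self-referencing parent id, for walking child tasks
        CREATE NONCLUSTERED INDEX IX_Task_ParentTaskId
            ON dbo.Task (ParentTaskId);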

    Read the article
