Search Results

Search found 32994 results on 1320 pages for 'second level cache'.

Page 476/1320 | < Previous Page | 472 473 474 475 476 477 478 479 480 481 482 483  | Next Page >

  • On Windows 7, how do I fix my cmd.exe icon and remove cruft from the jumplist

    - by sb3700
    Hi. When installing drivers for my Gigabyte motherboard, I installed a "Games" link which ran from a batch file. This was pinned to the taskbar by default. As a result, it changed the icon for cmd.exe to the icon for Games. I uninstalled Games, which got rid of the icon and left a white rectangle in its place (I can post screenshots on request). There is also a link in the jumplist to open Games, which just opens a cmd window. I've tried rebuilding my icon cache as per Changing Windows 7 pinned taskbar icons, but this only removed the white rectangle icon, leaving me with no real icon. c:\windows\system32\cmd.exe still has the appropriate icon in Explorer, just not on the taskbar. Any ideas on how to fix this annoyance?

    Read the article

  • HMVC or PAC - how to handle shared abstractions/models?

    - by fig-gnuton
    In HMVC/PAC, what's the recommended way to code things if two or more triads/agents share a common model/abstraction? Do you instantiate a new instance of that model wherever needed, and propagate a change in one to all the other instances via the controllers? Or do you instantiate one model at some common upper level, and inject that instance wherever needed? (Or neither, if I'm missing something fundamental about these patterns?)
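    For illustration only, here is a minimal Python sketch of the second option (one shared model created at an upper level and injected into each agent's controller); the class names are invented and whether this strictly fits HMVC/PAC depends on how the triads are wired:

        class CartModel:                      # the shared abstraction
            def __init__(self):
                self.items = []
                self._observers = []
            def subscribe(self, callback):
                self._observers.append(callback)
            def add(self, item):
                self.items.append(item)
                for notify in self._observers:    # every triad sees the same change
                    notify(item)

        class HeaderController:
            def __init__(self, model):            # model is injected, not created here
                model.subscribe(lambda item: print("header: cart size", len(model.items)))

        class CheckoutController:
            def __init__(self, model):
                model.subscribe(lambda item: print("checkout: added", item))

        shared = CartModel()                      # created once at a common upper level
        HeaderController(shared)
        CheckoutController(shared)
        shared.add("book")                        # both controllers react to one instance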

    Read the article

  • How to reduce disk thrashing (paging)?

    - by skevar7
    I have 4 GB of RAM, but Windows still thrashes the disk sometimes (especially when an application has been minimized for a while and I then activate it again). This seems completely pointless, because Task Manager shows 2 GB of RAM are free. Is there any way to prevent Windows from swapping out program memory? I tried setting Superfetch to cache startup files only (it helped a bit) and turning off the paging file (that helped a lot, and worked well for me in Windows XP; but Windows Vista/Windows 7 don't really allow it - they show a "low on memory" message frequently, even when I have 1 GB of RAM free). What can you advise me to do?

    Read the article

  • If a nonblocking recv with MSG_PEEK succeeds, will a subsequent recv without MSG_PEEK also succeed?

    - by Michael Wolf
    Here's a simplified version of some code I'm working on:

        void stuff(int fd)
        {
            int ret1, ret2;
            char buffer[32];

            ret1 = recv(fd, buffer, 32, MSG_PEEK | MSG_DONTWAIT);
            /* Error handling -- and EAGAIN handling -- would go here.
               Bail if necessary. Otherwise, keep going. */

            /* Can this call to recv fail, setting errno to EAGAIN? */
            ret2 = recv(fd, buffer, ret1, 0);
        }

    If we assume that the first call to recv succeeds, returning a value between 1 and 32, is it safe to assume that the second call will also succeed? Can ret2 ever be less than ret1? In which cases? (For clarity's sake, assume that there are no other error conditions during the second call to recv: that no signal is delivered, that it won't set ENOMEM, etc. Also assume that no other threads will look at fd. I'm on Linux, but MSG_DONTWAIT is, I believe, the only Linux-specific thing here. Assume that the right fcntl was set previously on other platforms.)
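    For what it's worth, the same peek-then-read pattern can be sketched in Python (Linux-only because of MSG_DONTWAIT; the function name is made up), under the question's assumption that nothing else reads from the socket between the two calls:

        import socket

        def peek_then_read(sock, n=32):
            try:
                # Peek without consuming; MSG_DONTWAIT makes this one call non-blocking.
                peeked = sock.recv(n, socket.MSG_PEEK | socket.MSG_DONTWAIT)
            except BlockingIOError:
                return None               # nothing available yet (EAGAIN/EWOULDBLOCK)
            if not peeked:
                return b""                # an empty result means the peer closed
            # For a TCP socket the peeked bytes stay queued in the kernel, so
            # reading exactly that many bytes is expected to return immediately.
            return sock.recv(len(peeked))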

    Read the article

  • Drupal advanced ACLs for "untrusted" administrators

    - by redShadow
    I have a multi-site Drupal 6 installation containing websites of different customers. On each site there is an "administrator" role that mainly covers the customer's account. We want to give as many permissions as possible to this privileged user, but with just the Drupal core permissions system this could lead to security leaks. The main thing to avoid is the customer account being able to run PHP code on the server (that would be like being logged in on the server as the www-data user, which sounds really bad). To prevent that, it is not sufficient to deny PHP code evaluation for the role: since the administrator role must have permission to manage users, the customer could also change the password of user #1 and log in to the site as superadmin. The second goal is to also deny some "confusing" administrative pages (such as module selection) but not others (such as site information configuration, theme selection, etc.). I found the User One module, which seems to fix the first problem, but I have no idea how to solve the second one. I found some modules around, but none seems to fit; it seems like most ACL modules are designed to protect the content, not the site itself, as if the site administrator were always the server owner.

    Read the article

  • UIWebView or Custom UIView

    - by dinoc
    I have a dilemma and was wondering if anyone else has had this issue. I have a navigation-based app where, when you drill down to the last level, that last view has lots and lots of information and details. I am wondering whether I should just use a UIWebView to display the details, or whether I should create a custom UIView with all the labels and outlets set up in that UIView. Which approach is recommended or preferred for something like this?

    Read the article

  • Android - Two different programs at the same time in an emulator

    - by Léa Massiot
    I'm new to Android development. My OS is WinXP. I'm trying to install two different applications on an Android emulator from the command line. I have two Android projects, "ap1" and "ap2". In the "ap1" project directory I ran "ant debug" and got an "ap1.apk" executable. In the "ap2" project directory I ran "ant debug" and got an "ap2.apk" executable. I created an Android Virtual Device:
        android create avd -n avd1 -t 1 --abi x86
    I launched the emulator:
        emulator -avd avd1 -verbose
    The "adb devices" command returns:
        List of devices attached
        emulator-5554   device
    I installed the first program on the emulator:
        adb -s emulator-5554 install "ap1.apk"
    I ran the program, and it worked:
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
    I installed the second program on the emulator:
        adb -s emulator-5554 install "ap2.apk"
    I ran the program, and it worked:
        adb shell am start -a android.intent.action.MAIN -n my.pkg2.android/.AnotherActivity1
    All this works, except that the second executable "replaced" the first one. If I try to run the first executable, I get an error:
        adb shell am start -a android.intent.action.MAIN -n my.pkg.android/.Activity1
        Starting: Intent { act=android.intent.action.MAIN cmp=my.pkg.android/.Activity1 }
        Error type 3
        Error: Activity class {my.pkg.android/my.pkg.android.Activity1} does not exist.
    It looks like I can't have the two apps at the same time in the emulator. What do you think? What do I have to do to have both apps available (at the same time) in the emulator? Thank you for helping. Best regards.

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-sourcing side of embedded programming, and after being completely overwhelmed by all the choices out there (PC/104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!) I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need:
    Required:
      - 2 RS232 serial ports (one used all the time for the primary UI, the second one not continuous)
      - 1 modem (9600+ baud OK) [the modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
      - Minimum permanent/long-term storage: whatever the O/S requires + 1 MB (executable) + 512 KB (data files)
      - RAM: minimal, whatever the O/S requires plus maybe 1 MB for the executable
    Nice to have:
      - USB port(s)
      - Ethernet network port
      - Wireless network
    Implementation languages (I will adapt to any O/S): first choice Java/C# (Mono OK), second choice C/Pascal, third BASIC.
    Given all this, I am having a lot of trouble finding hardware that supports it at low cost. Every manufacturer site I visit has a lot of options, and it's difficult to see whether their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports" but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-). EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS-based system, so I need to reimplement it on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. Here are the configurations I've created/updated so far; can someone take a look at them and see where the error might be? I've already checked my logs and there's nothing in them (http://i.imgur.com/iRq3ksb.png), and in the /var/log/php-fpm/error.log file I saw the error quoted in the title. Sidenote: both nginx and php-fpm have been configured to run under a local account called www-data, and the referenced folders exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache informations about FDs, frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and other from within the kernel
            # faster then read() + write()
            sendfile on;

            # send headers in one peace, its better then sending them one by one
            tcp_nopush on;

            # don't buffer data sent, good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non responding client, this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stop responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / { try_files $uri $uri/ @handler; expires 30d; }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

        [repeats the conf.d/www.domain.com.conf content above verbatim]

    I've got a file at /home/www-data/public_html/index.php containing <?php phpinfo(); ?> (uploaded as user www-data).

    Read the article

  • What is a good PDF report generator tool for python?

    - by jlouis
    What is a good tool for PDF report generation in Python? I've checked out ReportLab, but it seems to be awfully low-level for what I want to do. My current hunch is to call TeX on the command-line and let it produce the PDF, but if there is something that is easier to work with (and looks professional - We'll send this to customers) I'd very much like a prod in the right direction.
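    Since the question leans toward calling TeX, here is a minimal Python sketch of that route (it assumes pdflatex is installed on the PATH and that the body text is already LaTeX-safe; the template and function names are invented for illustration):

        import subprocess, tempfile, pathlib

        TEMPLATE = r"""
        \documentclass{article}
        \begin{document}
        \section*{%(title)s}
        %(body)s
        \end{document}
        """

        def build_pdf(title, body, out="report.pdf"):
            # Render the .tex file in a scratch directory, run pdflatex there,
            # then copy the resulting PDF to the requested location.
            with tempfile.TemporaryDirectory() as tmp:
                tex = pathlib.Path(tmp) / "report.tex"
                tex.write_text(TEMPLATE % {"title": title, "body": body})
                subprocess.run(["pdflatex", "-interaction=nonstopmode", tex.name],
                               cwd=tmp, check=True, capture_output=True)
                pathlib.Path(out).write_bytes((pathlib.Path(tmp) / "report.pdf").read_bytes())

        build_pdf("Monthly report", "Hello, customer.")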

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, the algorithm that looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job. This creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I have found two solutions:
    1. In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated, as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug because of all these relative dates.
    2. I have my own date/time accessor functions which I use throughout my code. In a test I just set a current date and all modules get this date, so I can easily simulate an order created in February and check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test those.
    The second approach has been far more successful for me. Therefore I'm looking for a way to set the time that Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though that would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
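    For reference, the second approach described above usually ends up looking something like this sketch (module and function names are invented); for patching datetime globally, including inside third-party code, libraries such as freezegun take the monkey-patching route instead:

        # appclock.py -- a hypothetical accessor module; production code asks this
        # module for "today" instead of calling datetime.date.today() directly.
        import datetime

        _frozen_date = None

        def today():
            return _frozen_date if _frozen_date is not None else datetime.date.today()

        def freeze(date):
            """Used only by tests to pin the current date."""
            global _frozen_date
            _frozen_date = date

        def unfreeze():
            global _frozen_date
            _frozen_date = None

        # in a test:
        # appclock.freeze(datetime.date(2010, 2, 1))   # "create" the order in February
        # ...then advance to April and run the invoicing job...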

    Read the article

  • How to output multicolumn html without "widows"?

    - by user314850
    I need to output to HTML a list of categorized links in exactly three columns of text. They must be displayed similar to columns in a newspaper or magazine. So, for example, if there are 20 lines total, the first and second columns would contain 7 lines and the last column would contain 6. The list must be dynamic; it will be changed regularly. The tricky part is that the links are categorized with a title and this title cannot be a "widow". If you have a page layout background you'll know that this means the titles cannot be displayed at the bottom of a column -- they must have at least one link underneath them, otherwise they should bump to the next column (I know, technically it should be two lines if I were actually doing page layout, but in this case one is acceptable). I'm having a difficult time figuring out how to get this done. Here's an example of what I mean: Shopping Link 3 Link1 Link 1 Link 4 Link2 Link 2 Link 3 Link 3 Cars Link 1 Music Games Link 2 Link 1 Link 1 Link 2 News As you can see, the "News" title is at the bottom of the middle column, and so is a "widow". This is unacceptable. I could bump it to the next column, but that would create an unnecessarily large amount of white space at the bottom of the second column. What needs to happen instead is that the entire list needs to be re-balanced. I'm wondering if anyone has any tips for how to accomplish this, or perhaps source code or a plugin. Python is preferable, but any language is fine. I'm just trying to get the general concept down.
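    Since Python is acceptable, here is a rough greedy sketch of the balancing idea (the data format and function name are invented): each line is tagged as a title or a link, the list is cut into three near-equal columns, and any cut that would leave a title as the last line of a column is pulled back so the title starts the next column instead. It does not re-balance optimally, but it avoids widowed titles.

        import math

        def split_columns(lines, ncols=3):
            # lines: list of (kind, text) tuples, kind is 'title' or 'link',
            # in the order they should flow down column 1, then 2, then 3.
            cols, start = [], 0
            remaining = len(lines)
            for c in range(ncols):
                size = math.ceil(remaining / (ncols - c))
                end = start + size
                # A title may not be the last line of a column (a "widow"):
                # pull the break back so the title begins the next column.
                while end < len(lines) and end > start and lines[end - 1][0] == 'title':
                    end -= 1
                cols.append(lines[start:end])
                remaining -= end - start
                start = end
            return cols

        # Example: categories with their links, flattened in display order.
        data = [('title', 'Shopping'), ('link', 'Link 1'), ('link', 'Link 2'),
                ('link', 'Link 3'), ('title', 'Games'), ('link', 'Link 1'),
                ('title', 'News'), ('link', 'Link 1')]
        for i, col in enumerate(split_columns(data), 1):
            print(i, [text for kind, text in col])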

    Read the article

  • LVM mirroring VS RAID1

    - by syrenity
    Hi. Having learned a bit about LVM mirroring, I thought about replacing my current RAID-1 scheme with it to gain some flexibility. The problem is that, according to what I found on the Internet, LVM is: 1) slower than RAID-1, at least for reads (as only a single volume is used for reading); 2) not reliable across power interruptions, and requires disabling the disk cache to prevent data loss (http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/). Also it seems, at least according to several setup guides I read (http://www.tcpdump.com/kb/os/linux/lvm-mirroring/intro.html), that one actually requires a third disk for storing the LVM log. This makes the setup unusable on two-disk installations and reduces the number of usable mirror disks in setups with more disks. Can anyone comment on the above points and share their experience with LVM mirroring? Thanks.

    Read the article

  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been connected to a domain. The domain has a Samba server as a PDC. Now say that I move the laptop outside of the network (the network is completely inaccessible). Will I be able to log on to accounts I have accessed before on the laptop (through GINA)? Update: looking at the smb.conf documentation I noticed the setting winbind offline logon: "This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache." To me it looks like this solves the issue, but can anyone confirm it and/or point out whether any additional values need to be set?

    Read the article

  • Time complexity to fill hash table (homework)?

    - by Heathcliff
    This is a homework question, but I think there's something missing from it. It asks: "Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum." And then: "Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing." I can only assume that the hash table has size m, both because it's the only number given and because we have been using that letter for the hash table size before, when describing the load factor. But I can't think of any sequence for the first part without knowing the hash function that maps the sequence into the table. If it is a bad hash function that, for instance, hashes every entry to the same index, then both the minimum and maximum time to fill it will take O(n) time, regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long that hash function will take to fill the table? Aren't these questions tied more strongly to the hash function than to the sequence being hashed? As for the second question, I can assume that, regardless of the hash function, a sequence of size m with the same key repeated m times will give the maximum time, because it will cause linear probing from the second entry on. I think that will take O(n) time. Is that correct? Thanks
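    To make the dependence on the hash function concrete, here is a small Python sketch (a made-up helper, not part of the assignment) that fills a size-m table with linear probing and counts slot inspections: keys whose home slots are all distinct give the minimum (m probes), while keys that all share one home slot give the maximum (1 + 2 + ... + m probes).

        def fill_linear_probing(keys, m, h=hash):
            """Insert each key into a size-m table using linear probing and return
            the total number of slot inspections. Assumes len(keys) <= m."""
            table = [None] * m
            probes = 0
            for key in keys:
                i = h(key) % m
                probes += 1                      # inspect the home slot
                while table[i] is not None:      # collision: walk to the next slot
                    i = (i + 1) % m
                    probes += 1
                table[i] = key
            return probes

        # Distinct home slots -> m probes; identical home slots -> 1+2+...+m probes.
        print(fill_linear_probing(range(10), 10))                     # 10
        print(fill_linear_probing([10 * k for k in range(10)], 10))   # 55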

    Read the article

  • How do CUDA devices handle immediate operands?

    - by Jack Lloyd
    When CUDA code is compiled with immediate (integer) operands, are they held in the instruction stream, or are they placed into memory? Specifically I'm thinking about 24- or 32-bit unsigned integer operands. I haven't been able to find information about this in any of the CUDA documentation I've examined so far, so references to any documents on specific uarch details like this would be perfect, as I don't currently have a good model for how CUDA works at this level.

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone, I have a stored procedure that does, among other things, some inserts into different tables inside a loop. See the example below:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        -- ... some stuff goes here ...
        INSERT INTO T2 VALUES (@MyID, 'something else')
        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. However, after running the procedure in our production database (a very busy database) and having more than 6250 records in each table, I noticed one incident where ID1 does not match ID2 -- although normally, for each record inserted into T1 there is a record inserted into T2, and the identity column in both is incremented consistently. The "wrong" records looked something like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 in the second table. Not knowing that much about SQL Server's underlying operations, I have come up with the following "theory"; maybe someone can correct me on this. What I think is that because the loop runs faster than the physical writes to the table, and/or maybe something else delayed the writing process, the records were buffered, and when the time came to write them, they were written in no particular order. Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question. What if the first insert (from the code above) got delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to that is also yes, what can I do to ensure the inserts into the two tables happen in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.

    Read the article

  • XMLHttpRequest returns null on Chrome

    - by BoltBait
    I have the following code that works fine in IE:

        <HTML>
        <BODY>
        <script language="JavaScript">
            text = "";
            req = new XMLHttpRequest();
            if (req) {
                req.onreadystatechange = processStateChange;
                req.open("GET", "http://www.boltbait.com", true);
                req.send();
            }
            function processStateChange() {
                // is the data ready for use?
                if (req.readyState == 4) {
                    // process my data
                    alert(req.status);
                    alert(req.responseText);
                }
            }
        </script>
        </BODY>
        </HTML>

    In IE, the first alert returns 200 and the second returns the web page. However, in Chrome the first alert returns 0 and the second returns the empty string. My intent is to grab a web page into a string for processing. If I'm not doing this right, how should I be doing this? Thanks.

    Read the article

  • How to study design patterns?

    - by Alien01
    I have read around 4-5 books on design patterns, but I still don't feel I have gotten closer to an intermediate level with them. How should I go about studying design patterns? Is there a best book on design patterns? I know this only comes with experience, but there must be some way to master them.

    Read the article

  • I have a WPF/Silverlight ListView whose height is unpredictable and too high. How do I control it be

    - by Rob Perkins
    I have a ListView element with a DataTemplate for each ListViewItem defined as follows. When run, the ListView's height is not collapsed onto the items in the view, which is undesirable behavior:

        <DataTemplate x:Key="LicenseItemTemplate">
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition Height="Auto" />
                    <RowDefinition Height="Auto" />
                </Grid.RowDefinitions>
                <TextBlock Grid.Row="0" Text="{Binding company}"></TextBlock>
                <Grid Grid.Row="1" Style="{StaticResource HiddenWhenNotSelectedStyle}">
                    <Grid.RowDefinitions>
                        <RowDefinition />
                    </Grid.RowDefinitions>
                    <Button Grid.Row="0">ClickIt</Button>
                </Grid>
            </Grid>
        </DataTemplate>

    The second row of the outer grid has a style applied which looks like this; its purpose is to hide that row unless its ListViewItem is selected:

        <Style TargetType="{x:Type Grid}" x:Key="HiddenWhenNotSelectedStyle">
            <Style.Triggers>
                <DataTrigger Binding="{Binding Path=IsSelected, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type ListViewItem}}}"
                             Value="False">
                    <Setter Property="Grid.Visibility" Value="Collapsed" />
                </DataTrigger>
                <DataTrigger Binding="{Binding Path=IsSelected, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type ListViewItem}}}"
                             Value="True">
                    <Setter Property="Grid.Visibility" Value="Visible" />
                </DataTrigger>
            </Style.Triggers>
        </Style>

    As rendered, the ListView is taller than its items; the desired appearance, when none of the elements are selected, is a ListView sized to its items -- with, of course, the ListView's height adjusting to accommodate the additional content when the second grid is made visible by selection. What can I do to get the desired behavior?

    Read the article

  • Some specific questions about object oriented and MVC design.

    - by Samn
    I have two objects, Users and Mail. Users create Mail objects and send them to other users. If I wanted to get all mail for a User, I could create a method like GetMail() that would return an array of Mail objects owned by that User. But if I wanted to get all mail across the system, what "type" of object would be responsible for that? To solve this problem, I usually create a Manager, which is an object responsible for dealing with a collection of a particular type of object. MailManager deals with collections of Mail objects. GetMailForUser() is one method, GetAllMail() is another method. The User object invokes the MailManager and executes GetMailForUser(me). Is this stupid? When a user executes the controller CreateMail, a new instance of the Mail object is created. The Mail object, seeing it is creating a new Mail of type 'sent', decides to go ahead and create a second Mail object for the recipient, of type 'received'. Creating one Mail object triggers the creation of a second Mail object. Is this stupid? Should the controller have created both Mail objects, or just the first 'sent' one? When two Users are friends, the association is stored in a table of Relationships. I use a simple object for Relationships. A RelationshipManager has a method called GetFriendsForUser(). The User object has a method GetFriends(), which invokes the RelationshipManager. Is this stupid?
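    Purely as a point of reference, the manager arrangement described above might be sketched in Python like this (class and method names follow the question; having the manager record both the 'sent' and 'received' copies in one call is just one of the two options being asked about, not a verdict on it):

        class Mail:
            def __init__(self, sender, recipient, kind):
                self.sender, self.recipient, self.kind = sender, recipient, kind

        class MailManager:
            """Owns the collection of Mail objects, like the question's managers."""
            def __init__(self):
                self._mail = []

            def create_mail(self, sender, recipient):
                # The controller asks the manager once; the manager records both
                # the sender's 'sent' copy and the recipient's 'received' copy.
                sent = Mail(sender, recipient, 'sent')
                received = Mail(sender, recipient, 'received')
                self._mail += [sent, received]
                return sent, received

            def get_all_mail(self):
                return list(self._mail)

            def get_mail_for_user(self, user):
                return [m for m in self._mail
                        if (m.kind == 'sent' and m.sender == user)
                        or (m.kind == 'received' and m.recipient == user)]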

    Read the article

  • Can I Do a Foreach on a TimeSpan by Timespan Type?

    - by IPX Ares
    I have a requirement that, regardless of the start and end dates, I need to loop through that timespan and calculate figures at the month level. I cannot seem to figure it out, and maybe it is not possible, but I would like to do something like:

        FOREACH Month As TimeSpan In ContractRange.Months
            Do Calculations(Month.Start, Month.End)
        NEXT

    Is this possible, or do I need to calculate the number of months, iterate that many times, and compute the start/end of each month based on my index?
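    The question is about .NET, but for what it's worth the fallback it describes (compute each month's start and end yourself) looks like this as a Python sketch with invented names; it assumes start <= end:

        import datetime

        def month_ranges(start, end):
            """Yield (first_day, last_day) for each calendar month touched by [start, end]."""
            current = start.replace(day=1)
            while current <= end:
                if current.month == 12:
                    nxt = current.replace(year=current.year + 1, month=1)
                else:
                    nxt = current.replace(month=current.month + 1)
                # Clamp the first and last months to the contract range itself.
                yield max(start, current), min(end, nxt - datetime.timedelta(days=1))
                current = nxt

        for first, last in month_ranges(datetime.date(2010, 1, 15), datetime.date(2010, 3, 10)):
            print(first, "->", last)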

    Read the article

  • I wrote my own web framework: now, how do I keep it in sync with applications? Must I use versions?

    - by Daniel Koch
    ... and I built the first web application using it; now I'm going to create the second. In this first web application I enhanced the framework's core library with new things and promptly updated the framework branch. I'm using bazaar to keep the framework and the web application committed. In the beginning, the application was a full branch of the framework source tree; now I'm updating the framework manually at every change to core files (copying changed files from the web app to the framework's branch). With this second web application that I'm going to create, I need to know which version (or revision) the application is based on. If I find a bug in this version I can fix it and then sync the files with the first web application without worrying: the functions will be the same for that application. If I'm going to make changes to the core (new behavior, new functions in the library, or something new in the source tree) it must be named as a "new version". What's the best way to do this? Because I'm using a distributed version control system (bazaar), I'm not dealing with VERSIONS, but with revision numbers that change every time. Please refresh my mind with new ideas.

    Read the article

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query select. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id, user.first_name
        -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys                                           key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                   first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    The second query:

        SELECT
        -- user.id
        -- user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    takes 45 seconds to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                           key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                             141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4        func  1       Using where

    Why is it slower if I query only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.

    Read the article
