Search Results

Search found 16987 results on 680 pages for 'second'.


  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of InnoDB data and log files. Up to MEB 3.6.1, the user backs up only the InnoDB tables in a 3-step process:
    STEP 1. Take the backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read-only by executing "FLUSH TABLES WITH READ LOCK".
    STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.
    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for the InnoDB files. Let me make the blog a little interactive by explaining the feature in question-and-answer form:
    1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.
    2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files, without locking the tables, during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files of InnoDB tables during the backup operation.
    3. How is data consistency ensured? MEB validates the .frm files after copying by comparing them with the server directory, to see if the timestamp of any of the .frm files is greater than the saved system time (check .frm time). This change in timestamp of the .frm files will show if a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to the server directory, it shows that a table or tables have been dropped during the backup; if the number of .frm files is more compared to the server directory, it shows that a new table or tables have been created during the backup. (A small illustrative sketch of this kind of check appears at the end of this post.)
    4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations, does the validation, and throws a WARNING if any inconsistency is found in the .frm files at the end of the backup operation. This means the user is warned of DDL operations that occurred during the backup, and has to manually copy the .frm files or take the backup again.
    5. What is the option and how is it used? The option introduced is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option takes one of two values: all | related. --only-innodb-with-frm=all copies the .frm files of all InnoDB tables. --only-innodb-with-frm=related works in conjunction with the --include option; this allows a partial backup of the .frm files corresponding to the tables specified in --include.
    Let me show the usage with example output:

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup

        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
         ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll
                --only-innodb-with-frm=all backup

        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".
        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb/
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrmAll/datadir
          innodb_data_home_dir      = /logs/backupWithFrmAll/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrmAll/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:36:22 mysqlbackup: INFO: Copying log...
        120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/'
        120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:36:25 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll'
        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------
        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrmAll/datadir/innodb1/
        table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd

    Here the backup directory contains all the .frm files of all the innodb tables.

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup

        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
         ./mysqlbackup -uroot --backup-dir=/logs/backup371frm
                --include=innodb1.table3.* --only-innodb-with-frm=related backup

        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".
        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrm/datadir
          innodb_data_home_dir      = /logs/backupWithFrm/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrm/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880
        mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: The --include option specified: innodb1.table3.*
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:25:47 mysqlbackup: INFO: Copying log...
        120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb'
        120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:25:51 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm'
        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------
        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrm/datadir/innodb1/
        table3.frm  table3.ibd

    Thus the backup directory contains only the .frm file matching the innodb table name specified in the --include option. In a nutshell, we present our great new option --only-innodb-with-frm, which is a true hot InnoDB-only backup with .frm files, but with an additional check for whether any DDL happened during the backup.
    If DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html
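    The validation described in points 3 and 4 above boils down to two checks: no .frm file should be newer than the time saved when the copy started, and the number of .frm files should not change while the backup runs. The following is a minimal Python sketch of that idea, assuming a made-up data-directory layout; it only illustrates the check and is not MEB's actual implementation.

        import glob
        import os
        import time

        def check_frm_consistency(server_datadir, backup_datadir, backup_start_time):
            """Return warnings about DDL that happened while .frm files were copied.

            Illustrative only -- MEB performs this validation internally.
            """
            warnings = []
            server_frms = glob.glob(os.path.join(server_datadir, "*", "*.frm"))
            backup_frms = glob.glob(os.path.join(backup_datadir, "*", "*.frm"))

            # A .frm newer than the saved start time means its table was altered
            # (or created) after the backup began.
            for frm in server_frms:
                if os.path.getmtime(frm) > backup_start_time:
                    warnings.append("DDL during backup detected on %s" % frm)

            # A count mismatch means tables were created or dropped mid-backup
            # (see points 3 and 4 above).
            if len(server_frms) != len(backup_frms):
                warnings.append("number of .frm files changed during backup "
                                "(%d on server vs %d copied)"
                                % (len(server_frms), len(backup_frms)))
            return warnings

        # Example usage with hypothetical paths:
        start = time.time()
        # ... copy the .frm files here ...
        for w in check_frm_consistency("/mysql/trydb", "/logs/backupWithFrmAll/datadir", start):
            print("WARNING:", w)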


  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLEDB connections, which I assume Linq To SQL is using. However I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context I can specify I want this option ON? Or somewhere in code I can do it?? I have managed a clumsy workaround, but I don't like it .... I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.
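    Linq to SQL itself is C#, so the snippet below is only a hedged illustration of the underlying pattern, written in Python with pyodbc: issue SET ARITHABORT ON once on the same connection, before the statements that touch the indexed view. The connection string, table, and column names are hypothetical.

        import pyodbc  # assumes an ODBC driver for SQL Server is installed

        # Hypothetical connection string -- adjust server/database/credentials.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=MyDb;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # Set the session option on the very connection that will do the work.
        cursor.execute("SET ARITHABORT ON;")

        # Statements against tables that participate in an indexed view
        # (or SELECTs using WITH (NOEXPAND)) now run with the required option.
        cursor.execute("UPDATE dbo.SomeTable SET SomeColumn = ? WHERE Id = ?", "value", 1)
        conn.commit()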


  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?
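    As a point of comparison, Python's built-in sqlite3 driver faces the same storage limitation and bridges it with adapters and converters, so application code still sees bool and datetime values rather than ints and strings. The sketch below is just that illustration (it is not DbLinq, the table is made up, and the built-in datetime converters are deprecated in Python 3.12+, but the mechanism is the point):

        import sqlite3
        from datetime import datetime

        # Teach the driver how to store and recover booleans; datetimes are
        # handled by sqlite3's default adapters when the column is declared
        # as TIMESTAMP and PARSE_DECLTYPES is enabled.
        sqlite3.register_adapter(bool, int)
        sqlite3.register_converter("BOOLEAN", lambda raw: raw != b"0")

        conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
        conn.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, done BOOLEAN, due TIMESTAMP)")
        conn.execute("INSERT INTO task (done, due) VALUES (?, ?)",
                     (True, datetime(2010, 5, 5, 8, 21)))

        done, due = conn.execute("SELECT done, due FROM task").fetchone()
        print(type(done), done)  # <class 'bool'> True
        print(type(due), due)    # <class 'datetime.datetime'> 2010-05-05 08:21:00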


  • iPhone status bar orientation with opengl

    - by sfider
    I have an OpenGL-only view that displays in portrait and landscape mode using a projection matrix (the view's transformation is identity all the time). I need to show the status bar with the proper orientation. I do this by setting the status bar orientation property on UIApplication and changing the frame of the OpenGL view so the view won't go under the status bar. When I change from landscape to portrait (landscape is the initial state), the view's frame is set to (0, 20, 320, 460) and stays like this. However, the view appears to be translated by (-10, -10). It seems that I did change the size of the view but couldn't move it. Weird things are:
    - initially the view is full screen; I change it to (0, 0, 300, 480) (landscape with status bar) and it works
    - then, it doesn't work when I try to change it for the second time (portrait with status bar)
    - the frame property of the view shows that the view is placed correctly
    Any thoughts on what can be the problem?


  • T-SQL SQL Server - Stored Procedure with parameter

    - by Ricardo Conte
    Please, the first TSQL works FINE, the second does not. I guess it must be a simple mistake, since I am not used to T-SQL. Thank you for the answers. R Conte.

    * WORKS FINE ******************* (parm hard-coded)

        ALTER PROCEDURE rconte.spPesquisasPorStatus
        AS
          SET NOCOUNT ON
          SELECT pesId, RTRIM(pesNome), pesStatus, pesPesGrupoRespondente,
                 pesPesQuestionario, pesDataPrevistaDisponivel,
                 pesDataPrevistaEncerramento, pesDono
          FROM dbo.tblPesquisas
          WHERE (pesStatus = 'dis')
          ORDER BY pesId DESC
          RETURN

    Running [rconte].[spPesquisasPorStatus].

        pesId  Column1  pesStatus  pesPesGrupoRespondente  pesPesQuestionario  pesDataPrevistaDisponivel  pesDataPrevistaEncerramento  pesDono
        29  XXXXXXXXX xxxxx  dis  17  28  5/5/2010 08:21:12  5/5/2010 08:21:12  1
        28  Xxxxxxxx xxxxxxxxxxxxx  dis  16  27  5/5/2010 07:44:12  5/5/2010 07:44:12  1
        27  Xxxxxxxxxxxxxxxxxxxxxxx

    * DOES NOT WORK ************** (using a parm; pesStatus is nchar(3))

        ALTER PROCEDURE rconte.spPesquisasPorStatus (@pPesStatus nchar(3) = 'dis')
        AS
          SET NOCOUNT ON
          SELECT pesId, RTRIM(pesNome), pesStatus, pesPesGrupoRespondente,
                 pesPesQuestionario, pesDataPrevistaDisponivel,
                 pesDataPrevistaEncerramento, pesDono
          FROM dbo.tblPesquisas
          WHERE (pesStatus = @pPesStatus)
          ORDER BY pesId DESC
          RETURN

    Running [rconte].[spPesquisasPorStatus] ( @pPesStatus = 'dis' ).

        pesId  Column1  pesStatus  pesPesGrupoRespondente  pesPesQuestionario  pesDataPrevistaDisponivel  pesDataPrevistaEncerramento  pesDono
        No rows affected. (0 row(s) returned)
        @RETURN_VALUE = 0
        Finished running [rconte].[spPesquisasPorStatus]


  • Comparing images using SIFT

    - by Luís Fernando
    I'm trying to compare 2 images that are taken from a digital camera. Since there may be movement of the camera, I want to first make the pictures "match" and then compare them (using some distance function). To match them, I'm thinking about cropping the second picture and using SIFT to find it inside the first picture... it will probably have a small difference in scale/translation/rotation, so then I'd need to find the transformation matrix that converts image 1 to image 2 (based on points found by SIFT). Any ideas on how to do that (or I guess that's a common problem that may have some open-source implementation)? thanks
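    This is indeed a common pipeline with open-source support. The hedged Python/OpenCV sketch below assumes a build where cv2.SIFT_create is available (OpenCV 4.4+ or the contrib package) and uses made-up file names: detect SIFT keypoints in both photos, keep distinctive matches with a ratio test, estimate the image-1-to-image-2 transformation with RANSAC, then warp and compare.

        import cv2
        import numpy as np

        img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
        img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Match descriptors and keep only distinctive matches (Lowe's ratio test).
        matcher = cv2.BFMatcher()
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]

        # Estimate the homography mapping image 1 onto image 2, using RANSAC
        # to ignore mismatched keypoints.
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp image 1 into image 2's frame so the two can be compared pixel-wise.
        aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
        diff = cv2.absdiff(aligned, img2)
        print("mean absolute difference:", float(diff.mean()))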


  • T-SQL Picking up active IDs from a comma-separated IDs list

    - by hammayo
    I have two tables. The first, "Product", has the following structure:

        ProductID, ProductName, IsSaleTypeA, IsSaleTypeB, IsSaleTypeC
        1, AAA, N, N, N
        2, BBB, N, Y, N -- active
        3, CCC, N, N, N
        4, DDD, Y, N, N -- active
        5, EEE, N, N, N
        6, FFF, N, N, N
        7, FFE, N, N, N
        8, GGG, N, N, N
        9, HHH, Y, N, N -- active

    The second table, "ProductAllowed", has the following structure, where ProductIDs is a comma-separated string field holding a mix of active and inactive product ids based on their IsSaleType mode:

        ProductCode, ProductIDs
        AMRLSPN, "1,2"
        AMRLOFD, "1,3"
        BLGHVF, "2,4,6"
        BLGHVO, "2,4"
        BLGHVD, "3,5"
        BLGSDO, "0"
        CHOHVF, "1,6"
        CHOHVP, "1,2,7,8"
        ...
        ...

    Q: Is there a T-SQL query that will return a list of active records from the "ProductAllowed" table if any of the three IsSaleType fields is switched on for a product? Based on the sample data, the query should return the following records:

        AMRLSPN
        BLGHVF
        BLGHVO
        BLGSDO
        CHOHVP

    This needs to be applied in a SQL Server 2000 database containing approx. 150,000 records. Thanks
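    Stated outside the database, the rule is: a ProductAllowed row qualifies if its comma-separated list contains at least one ProductID whose IsSaleType flags include a Y. The Python sketch below only prototypes that rule on the sample data (it prints AMRLSPN, BLGHVF, BLGHVO and CHOHVP); inside SQL Server 2000 you would typically need a string-split helper or a LIKE-based join instead.

        # Sample data transcribed from the question.
        products = {
            1: ("AAA", "N", "N", "N"), 2: ("BBB", "N", "Y", "N"), 3: ("CCC", "N", "N", "N"),
            4: ("DDD", "Y", "N", "N"), 5: ("EEE", "N", "N", "N"), 6: ("FFF", "N", "N", "N"),
            7: ("FFE", "N", "N", "N"), 8: ("GGG", "N", "N", "N"), 9: ("HHH", "Y", "N", "N"),
        }
        product_allowed = {
            "AMRLSPN": "1,2", "AMRLOFD": "1,3", "BLGHVF": "2,4,6", "BLGHVO": "2,4",
            "BLGHVD": "3,5", "BLGSDO": "0", "CHOHVF": "1,6", "CHOHVP": "1,2,7,8",
        }

        # A product is "active" if any of its three IsSaleType flags is Y.
        active_ids = {pid for pid, (_, a, b, c) in products.items() if "Y" in (a, b, c)}

        # A ProductAllowed row qualifies if its ID list intersects the active IDs.
        for code, id_list in product_allowed.items():
            ids = {int(x) for x in id_list.split(",")}
            if ids & active_ids:
                print(code)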


  • Trouble Extending Network with Apple Airport Extreme (PC won't connect with Ethernet)

    - by 0x783czar
    So I have an Apple Airport Extreme base station that I use to create a wifi network at my house. But on a separate story from this station I have a PC (running Windows 7) that does not have a wireless card. Luckily, I have another Airport Extreme base station, so I figured I'd have my second station "extend" the existing network. I asked Apple if this was possible, which it was, and walked through the setup wizard to extend the network. Then I ran an Ethernet cable from the station to my PC. However, the PC refuses to connect to the Internet. It says it can access the network, "Unidentified Network (Limited Connectivity)", but that's it. It tells me that my computer does not have an IP address. I tried running the cable to another computer (my Apple MacBook Pro) and got a similar error. Any thoughts?


  • Relationship Modelling in Core Data

    - by Stevie
    Hi there, I'm fairly new to Objective-C and Core Data and have a problem designing a case where players team up one-on-one and have multiple matches that end up with a specific result. With MySQL, I would have a Player table (player primary key, name) and a Match table (player A foreign key, player B foreign key, result). Now how do I do this with Core Data? I can easily tie a player entity to a match entity using a relationship. But how do I model the inverse direction for the second player ref. in the match entity?

        Player
          Name: Attribute
          Match: Relationship

        Match
          Match Result: Attribute
          PlayerA: Relationship to Player (<- Inverse to Player.Match)
          PlayerB: Relationship to Player (<- Inverse to ????)

    Would be great if someone could give me an idea on this! Thanks, Stevie.
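    The relational shape behind the question is two foreign keys from Match to Player, each with its own separately named inverse collection, which mirrors the usual Core Data approach of giving Player two distinct relationships (one inverse for PlayerA, one for PlayerB). Below is a hedged sketch of that shape in Python with SQLAlchemy 1.4+, not Core Data; all class and attribute names are invented.

        from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
        from sqlalchemy.orm import Session, declarative_base, relationship

        Base = declarative_base()

        class Player(Base):
            __tablename__ = "player"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        class Match(Base):
            __tablename__ = "matches"
            id = Column(Integer, primary_key=True)
            result = Column(String)
            player_a_id = Column(Integer, ForeignKey("player.id"))
            player_b_id = Column(Integer, ForeignKey("player.id"))

            # Two references to the same entity, each with its own named inverse.
            player_a = relationship("Player", foreign_keys=[player_a_id],
                                    backref="matches_as_a")
            player_b = relationship("Player", foreign_keys=[player_b_id],
                                    backref="matches_as_b")

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            alice, bob = Player(name="Alice"), Player(name="Bob")
            session.add(Match(player_a=alice, player_b=bob, result="1-0"))
            session.commit()
            print([m.result for m in alice.matches_as_a],
                  [m.result for m in bob.matches_as_b])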


  • Multiple forms in delphi

    - by Hendriksen123
    In my Delphi project I want to have a 'Settings' button that, when clicked, opens a second form (I think this is the correct term; I essentially want a new window to open) for settings. When the user has finished changing the settings on this new form, I want the form to close on a button click. The settings the user types in will also need to be accessible to the first, 'main' form. So, for example, if my programme consisted of a main form that calculated 'A' + 'B' (A and B being integer variables), with the settings form allowing the user to set values for A and B, how would I do this?


  • git: how to squash the first two commits?

    - by kch
    With git rebase --interactive <commit> you can squash any number of commits together into a single one. It's an OCD heaven. And that's all great unless you want to squash commits into the initial commit. That seems impossible to do. Any way to achieve it? Moderately related: In a related question, I managed to come up with a different approach to the need of squashing against the first commit, which is, well, to make it the second one. If you're interested: git: how to insert a commit as the first, shifting all the others?


  • Linq and Lambda Expressions - while walking a selected list perform an action

    - by Prescott
    Hey, I'm very new to LINQ and lambda expressions. I'm trying to walk a collection, and when I find an item that meets some criteria I'd like to add it to another, separate collection. My LINQ to walk the collection looks like this (this works fine):

        From i as MyCustomItem In MyCustomItemCollection
        Where i.Type = "SomeType"
        Select i

    I need each of the selected items to then be added to a ListItemCollection. I know I can assign that LINQ query to a variable and then do a For Each loop adding a new ListItem to the collection, but I'm trying to find a way to add each item to the new ListItemCollection while walking, not in a second loop. Thanks ~P


  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons.

    To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output), as some sources suggested that this was equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer.

    I then calculated the errors, network output, bias updates, and weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests to subtract it instead.

    The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training:

        Input     | Target | Output (Batch)       | Output (Incremental)
        [1.0,1.0] | [0.0]  | [0.5003781562785173] | [0.5009731800870864]
        [1.0,0.0] | [1.0]  | [0.5003740346965251] | [0.5006347214672715]
        [0.0,1.0] | [1.0]  | [0.5003734471544522] | [0.500589332376345]
        [0.0,0.0] | [0.0]  | [0.5003674110937019] | [0.500095157458231]

    Subtracting instead of adding produces the same problem, except everything is 0.99 something instead of 0.50 something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior?

    Also, I looked at the intermediate errors for incremental training, and although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly, and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either.

    Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on... in theory, at least. There doesn't seem to be a computer science teacher at my school).

    EDIT: Two layers, an input layer of 2 inputs to 8 outputs, and an output layer of 8 inputs to 1 output, produces much the same results: 0.5 +/- 0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work.

    Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that.

    Edit 3: Pybrain's "trainUntilConvergence" doesn't get me a fully trained network either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs & 2 outputs, hidden layer with 2 inputs and 8 outputs, and output layer with 8 inputs and 1 output... I get the same problem with 10000 epochs. And with 20000 epochs.

    Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code somewhat and putting it in a Github (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.
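    As a cross-check for the Haskell code, here is a small self-contained reference of the same setup in Python/NumPy: a 2-4-1 logistic network trained on XOR with plain batch gradient descent. It is a sketch under stated assumptions (learning rate 0.5, 20000 epochs, gradient subtracted), not a reproduction of the original program.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # XOR training set.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)

        # 2-4-1 network; biases kept as separate vectors.
        W1 = rng.uniform(-1, 1, (2, 4)); b1 = np.zeros(4)
        W2 = rng.uniform(-1, 1, (4, 1)); b2 = np.zeros(1)
        lr = 0.5

        for epoch in range(20000):
            # Forward pass.
            H = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 4)
            Y = sigmoid(H @ W2 + b2)      # outputs, shape (4, 1)

            # Backward pass: delta = error * derivative of the logistic function.
            dY = (Y - T) * Y * (1 - Y)
            dH = (dY @ W2.T) * H * (1 - H)

            # Batch updates; note the sign -- the gradient is subtracted.
            W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
            W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

        # Typically close to [0, 1, 1, 0]; an unlucky seed may need re-running.
        print(np.round(Y, 3).ravel())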


  • Request is not available in this context

    - by Vishal Seth
    I'm running IIS 7 Integrated mode and I'm getting "Request is not available in this context" when I try to access it in a Log4Net-related function that is called from Application_Start. This is the line of code I have:

        if (HttpContext.Current != null && HttpContext.Current.Request != null)

    and an exception is being thrown for the second comparison. What else can I check other than checking HttpContext.Current.Request for null? A similar question is posted @ http://stackoverflow.com/questions/2056398/request-is-not-available-in-this-context-exception-when-runnig-mvc-on-iis7-5 but no relevant answer there either.


  • jQuery ready function

    - by Darren Tarrant
    Can anyone tell me why the document ready function needs to be given a function first, please? I've been told that the setTimeout in the first example below (which does not work) would be evaluated and its result passed to ready, but I don't see what the difference would be for the function call in the second example (which works)?

        $(document).ready(
            setTimeout(
                function(){
                    $('#set_3').innerfade({
                        animationtype: 'fade',
                        speed: 'slow',
                        timeout: 3000,
                        type: 'sequence',
                        containerheight: '180'
                    });
                }, 2000);
        );

        $(document).ready(
            function(){
                setTimeout(
                    function(){
                        $('#set_3').innerfade({
                            animationtype: 'fade',
                            speed: 'slow',
                            timeout: 3000,
                            type: 'sequence',
                            containerheight: '180'
                        });
                    }, 2000);
            }
        );
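    The asker's premise is the key point: ready() wants a callable to invoke later, while setTimeout(...) runs immediately and hands ready() its return value. The same distinction, sketched in Python with made-up names:

        def ready(callback):
            """Stand-in for $(document).ready: it wants something it can call later."""
            if callable(callback):
                callback()                  # jQuery would call this once the DOM is ready
            else:
                print("got a non-callable value:", callback)

        def fade_in():
            print("fading in #set_3")

        ready(fade_in())  # WRONG: fade_in() runs immediately; ready() receives its return value (None)
        ready(fade_in)    # RIGHT: ready() receives the function itself and decides when to call it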


  • In need of a SaaS solution for semantic thesaurus matching

    - by Roy Peleg
    Hello, I'm currently building a web application. In one of its key processes the application needs to match short phrases to other similar ones available in the DB. The application needs to be able to match the phrase:

        Looking for a second hand car in good shape

    to other phrases which basically have the same meaning but use different wording, such as:

        2nd hand car in great condition needed

    or

        searching for a used car in optimal quality

    The phrases are length-limited (say 250 chars), user-generated and unstructured. I'm in need of a service / company / some solution which can help make these connections for me. Can anyone give any ideas? Thanks, Roy
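    While the question asks for a hosted service, a quick local baseline helps make the requirement concrete. The hedged Python sketch below scores phrase similarity with scikit-learn's TF-IDF over character n-grams; it catches wording overlap ('car', '2nd hand' vs 'second hand') but not true synonyms ('used' vs 'second hand'), which is exactly the gap a semantic thesaurus service would have to fill.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        query = "Looking for a second hand car in good shape"
        candidates = [
            "2nd hand car in great condition needed",
            "searching for a used car in optimal quality",
            "apartment for rent near the city center",
        ]

        # Character n-grams are forgiving about inflection and small spelling changes.
        vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
        matrix = vectorizer.fit_transform([query] + candidates)

        scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        for phrase, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
            print(f"{score:.2f}  {phrase}")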


  • jQuery Delay Question

    - by Fuego DeBassi
        <script type="text/javascript">
        $(document).ready(function() {
            $(".module .caption").hide();
            $(".module").hover(function() {
                $(this).find(".caption").slideDown().end().siblings('.module').addClass('under');
            }, function() {
                $(this).find(".caption").slideUp().end().siblings('.module').removeClass('under').delay(10000);
            });
        });
        </script>

    This works great, except the .delay doesn't work. Is my syntax wrong? I'm just trying to have the .removeClass("under") delayed by a second or two when the mouse un-hovers. I don't want to delay the slideUp. Any ideas?


  • What is going on with the "return fibonacci( number-1 ) + fibonacci( number-2 )"?

    - by user1478598
    I have a problem understanding what return fibonacci( number-1 ) + fibonacci( number-2 ) does in the following program:

        import sys

        def fibonacci( number ):
            if( number <= 2 ):
                return 1
            else:
                return fibonacci( number-1 ) + fibonacci( number-2 )

    The problem is that I can't imagine how this line works:

        return fibonacci( number-1 ) + fibonacci( number-2 )

    Are both "fibonacci( number-1 )" and "fibonacci( number-2 )" processed at the same time, or is "fibonacci( number-1 )" processed first and then the second one? I only see that processing both of them would eventually return '1', so the last result I expect to see is '1 + 1' = '2'. I would appreciate it a lot if someone could explain the process of its calculation in detail. I think this is a very newb question but I can't really get a picture of its process.
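    Because the snippet is already Python, the evaluation order can simply be made visible: Python finishes the left operand, fibonacci(number-1), including all of its own recursion, before it starts fibonacci(number-2), and then adds the two results on the way back up. A hedged, instrumented variant of the same function:

        def fibonacci(number, depth=0):
            print("  " * depth + f"fibonacci({number}) called")
            if number <= 2:
                result = 1
            else:
                # The left call completes (including all of its own recursion)
                # before the right call even starts; the two results are then added.
                left = fibonacci(number - 1, depth + 1)
                right = fibonacci(number - 2, depth + 1)
                result = left + right
            print("  " * depth + f"fibonacci({number}) returns {result}")
            return result

        print("final:", fibonacci(4))
        # fibonacci(4) = fibonacci(3) + fibonacci(2)
        #              = (fibonacci(2) + fibonacci(1)) + 1
        #              = (1 + 1) + 1 = 3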


  • Dynamically allocated structure and casting.

    - by Simone Margaritelli
    Let's say I have a first structure like this:

        typedef struct {
            int  ivalue;
            char cvalue;
        } Foo;

    And a second one:

        typedef struct {
            int  ivalue;
            char cvalue;
            unsigned char some_data_block[0xFF];
        } Bar;

    Now let's say I do the following:

        Foo *pfoo;
        Bar *pbar;

        pbar = new Bar;
        pfoo = (Foo *)pbar;

        delete pfoo;

    Now, when I call the delete operator, how much memory does it free?

        sizeof(int) + sizeof(char)

    or

        sizeof(int) + sizeof(char) + sizeof(char) * 0xFF ?

    And if it's the first case due to the casting, is there any way to prevent this memory leak from happening? Note: please don't answer "use C++ polymorphism" or similar, I am using this method for a reason.


  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works:
    - it gets a stream of all the data
    - it fetches the right file (GET)
    - it adds the new content
    - it saves the file back (PUT)
    We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?
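    Whatever store ends up being chosen, the access pattern is "append an entry under a file's key and cap the list at the last 10". The hedged sketch below shows that pattern with Redis via redis-py (assuming a local server and made-up key names), purely to pin down the data model, not as a claim that Redis is the right horizontally scaling answer for 2M keys at 250 writes/sec.

        import json
        import time
        import redis  # assumes a Redis server reachable on localhost

        r = redis.Redis(host="localhost", port=6379)

        def record_entry(file_id: str, entry: dict, keep: int = 10) -> None:
            """Append an entry for a file and keep only the most recent `keep` entries."""
            key = f"entries:{file_id}"
            pipe = r.pipeline()
            pipe.lpush(key, json.dumps(entry))   # newest entry goes to the head
            pipe.ltrim(key, 0, keep - 1)         # drop everything past the last `keep`
            pipe.execute()

        def last_entries(file_id: str):
            return [json.loads(raw) for raw in r.lrange(f"entries:{file_id}", 0, -1)]

        record_entry("file-000042", {"ts": time.time(), "value": 123})
        print(last_entries("file-000042"))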


  • Is there a config option in PHP to prevent undefined constants from being interpreted as strings?

    - by mrbinky3000
    This is from the PHP manual: http://us.php.net/manual/en/language.constants.syntax.php

    "If you use an undefined constant, PHP assumes that you mean the name of the constant itself, just as if you called it as a string (CONSTANT vs "CONSTANT"). An error of level E_NOTICE will be issued when this happens."

    I really don't like this behavior. If I have failed to define a required constant, I would rather the script fail so that I am forced to define it. Is there any way to force PHP to crash the script if it tries to use an undefined constant? For example, both of these scripts do the same thing:

        <?php
        define('DEBUG', 1);
        if (DEBUG) echo('Yo!');
        ?>

    and

        <?php
        if (DEBUG) echo('Yo!');
        ?>

    I would rather the second script DIE and declare that it tried to use an undefined constant DEBUG.


  • Union on two tables with a where clause in the one

    - by Lostdrifter
    Currently I have 2 tables; both of the tables have the same structure and are going to be used in a web application. The two tables are production and temp. The temp table contains one additional column called [signed up]. Currently I generate a single list using two columns that are found in each table (recno and name). Using these two fields I'm able to support my web application's search function. Now what I need to do is limit the items that can be used in the search on the second table. The reason for this is that once a person is "signed up", a similar record is created in the production table and will have its own recno. Doing:

        Select recno, name from production
        UNION ALL
        Select recno, name from temp

    ...will show me everyone. I have tried:

        Select recno, name from production
        UNION ALL
        Select recno, name from temp
        WHERE signup <> 'Y'

    But this returns nothing? Can anyone help?


  • Does copy_from_user modify the user pointer?

    - by Michael
    Does the copy_from_user function, declared in uaccess.h, modify the (void __user *)from pointer? The pointer isn't declared as const in the function declaration, only the contents it points to. The reason I ask is that I want to use copy_from_user twice, with the second copy_from_user copying from the place where the first one finished. I was planning on doing something like this; is it guaranteed to work?

        // buf is a user pointer that is already defined
        copy_from_user(my_first_alloced_region, buf, some_size);
        // do stuff
        copy_from_user(my_second_alloced_region, buf + some_size, some_other_size);

    Thanks in advance.


  • Codeigniter redirect repeats controller name in URL

    - by Obay
    This is my controller:

        class Timesheet extends Controller {
            ...
            function index() {
                // loads view with a form that submits to "timesheet/change_date"
            }
            function summary() {
                // loads view with a form that submits to "timesheet/change_week"
            }
            function change_date() {
                ...
                redirect('timesheet');
            }
            function change_week() {
                ...
                redirect('timesheet/summary');
            }
            ...
        }

    The first form is located at http://localhost/dts/index.php/timesheet and when I submit the change_date form, it correctly goes through the change_date() function and reloads http://localhost/dts/index.php/timesheet correctly. However, the second form is located at http://localhost/dts/index.php/timesheet/summary, and when I submit the change_week form, it goes through the change_week() function but ends up at http://localhost/dts/index.php/timesheet/timesheet/change_week. Notice the word timesheet is repeated. When I submit the form again, another timesheet is added. What's wrong and how do I improve my code? My .htaccess is below:

        RewriteEngine on
        RewriteCond $1 !^(index\.php|webroot|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]

