Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • A way for a file to have its own MD5 inside? Or a string that is its own MD5?

    - by Eli
    Hi all, while considering several possible solutions to a recent task, I found myself wondering how to produce a PHP file that includes its own MD5 hash. I ended up doing something else, but the question stayed with me. Something along the lines of: <?php echo("Hello, my MD5 is [MD5 OF THIS FILE HERE]"); ?> Whatever placeholder you put in the file, the second you take its MD5 and insert it, you've changed the file, which changes its MD5, and so on.

    Edit: Perhaps I should rephrase my question: does anyone know whether it has been proven impossible, or whether there has been any research on an algorithm that would produce a file containing its own MD5 (or other hash)? I suppose if the MD5 were the only content in the file, the problem could be restated as how to find a string that is its own MD5. It may well be impossible for us to create a process that will result in such a thing, but I can't think of any reason the solution itself can't exist. The question is basically whether it really is impossible, simply improbable (on the order of monkeys randomly typing Shakespeare), or actually solvable by somebody smarter than myself.
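    For what it's worth, the "string that is its own MD5" variant can at least be stated as a fixed-point search. A naive PHP sketch, purely illustrative: nothing guarantees such a fixed point exists, and the 16^32 search space makes random probing hopeless.

        <?php
        // Look for a 32-character lowercase-hex string s with md5(s) === s.
        $hex = '0123456789abcdef';
        for ($attempt = 0; $attempt < 1000000; $attempt++) {
            $candidate = '';
            for ($i = 0; $i < 32; $i++) {
                $candidate .= $hex[mt_rand(0, 15)];
            }
            if (md5($candidate) === $candidate) {
                echo "Fixed point found: $candidate\n";
                exit;
            }
        }
        echo "No fixed point found in this run (as expected).\n";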

    Read the article

  • Importing fixtures with foreign keys and SQLAlchemy?

    - by Chris Reid
    I've been experimenting with using fixture to load test data sets into my Pylons / PostgreSQL app. This works great, except that it fails to properly create foreign keys if they reference an auto-increment id field. My fixture looks like this:

        class AuthorsData(DataSet):
            class frank_herbert:
                first_name = "Frank"
                last_name = "Herbert"

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')

    And the model:

        t_authors = sa.Table("authors", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("first_name", sa.types.String(100)),
            sa.Column("last_name", sa.types.String(100)),
        )
        t_books = sa.Table("books", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("title", sa.types.String(100)),
            sa.Column("author_id", sa.types.Integer, sa.ForeignKey('authors.id'))
        )

    When running "paster setup-app development.ini", SQLAlchemy reports the FK value as "None", so it's obviously not finding it:

        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] INSERT INTO books (title, author_id) VALUES (%(title)s, %(author_id)s) RETURNING books.id
        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] {'author_id': None, 'title': 'Dune'}

    The fixture docs actually warn that this might be a problem: "However, in some cases you may need to reference an attribute that does not have a value until it is loaded, like a serial ID column. (Note that this is not supported by the SQLAlchemy data layer when using sessions.)" http://farmdev.com/projects/fixture/using-dataset.html#referencing-foreign-dataset-classes

    Does this mean that this is just not supported with SQLAlchemy? Or is it possible to load the data without using SA "sessions"? How are other people handling this issue?
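    One possible workaround, sketched here independently of fixture (the engine/connection handling is an assumption): insert the parent row first and reuse the id the database assigned.

        # Insert the parent, read back the generated id, then insert the child,
        # all on one connection/transaction. Requires SQLAlchemy with
        # ResultProxy.inserted_primary_key (0.6+); older versions expose
        # last_inserted_ids() instead.
        conn = meta.engine.connect()
        trans = conn.begin()

        result = conn.execute(t_authors.insert().values(
            first_name="Frank", last_name="Herbert"))
        author_id = result.inserted_primary_key[0]   # serial id assigned by the DB

        conn.execute(t_books.insert().values(title="Dune", author_id=author_id))
        trans.commit()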

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards, in this case). Most of the time the performance is tolerable, but there are a couple of customers for whom the process is painfully slow, taking literally a minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to do the whole thing, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData where cardid = @cardId order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
            set @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
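    As an aside, the nested cursors are usually what keeps everything on a single scheduler; a single set-based statement at least gives the optimizer a chance at a parallel plan. A rough, untested sketch using the same object names, where the FOR XML PATH trick stands in for the inner concatenation loop:

        SELECT cc.cardId,
               (SELECT CONVERT(nvarchar(max),
                               DecryptByCert(Cert_Id('Oh-Nay-Nay'), ccd.EncryptedCard, @DecryptionKey))
                  FROM CreditCardData ccd
                 WHERE ccd.cardid = cc.cardId
                 ORDER BY ccd.valueOrder
                 FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)') AS DecryptedCard
          FROM CreditCards cc
         WHERE cc.companyId = @companyId AND cc.Active = 1
         ORDER BY cc.addedBy DESC;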

    Read the article

  • Simple implementation of an admin/staff panel?

    - by Michael Mao
    Hi all: A new project requires a simple panel (page) for the admin and staff members that:

    - preferably will not use SSL or any digital certificate stuff; a simple login form over HTTP will be fine;
    - has basic authentication that allows only the admin to log in as admin, and any staff member as part of the group "staff"; ideally the credentials (username / hashed-password pairs) will be stored in MySQL;
    - is simple to configure if there is a package, or simple to code otherwise;
    - somewhere (PHP session?), somehow (include a script at the beginning of each page to check the user group before doing anything?), detects any invalid attempt to access a protected page and redirects the user to the login form;
    - still keeps the security quality high, which is what I worry about the most.

    Frankly, I have little knowledge of Internet security and of how modern CMSes such as WordPress/Joomla implement this. The one thing I have in mind is that I need to use a salt when hashing the password (SHA1?), so that a hacker who captures the username and password pair across the net cannot use it to log into the system; that is what the client wants to make sure of. But I'm really not sure where to start. Any ideas? Thanks a lot in advance. A rough sketch follows this list of requirements.
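    A minimal sketch of the usual pattern (salted hash stored in MySQL, plus a session check included at the top of every protected page); all table, column, and variable names here are illustrative, and $pdo is an assumed existing connection:

        <?php
        // login.php (sketch): verify credentials against MySQL
        session_start();
        $stmt = $pdo->prepare('SELECT user_id, user_group, pass_hash, salt FROM users WHERE username = ?');
        $stmt->execute(array($_POST['username']));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);

        if ($user && sha1($user['salt'] . $_POST['password']) === $user['pass_hash']) {
            $_SESSION['user_id'] = $user['user_id'];
            $_SESSION['group']   = $user['user_group'];   // 'admin' or 'staff'
        } else {
            header('Location: login.php?failed=1');
            exit;
        }

        // auth_check.php (sketch): include at the top of every protected page
        session_start();
        if (empty($_SESSION['user_id']) || !in_array($_SESSION['group'], array('admin', 'staff'))) {
            header('Location: login.php');
            exit;
        }

    One caveat: a salted hash protects what is stored in the database; it does not protect the password while it travels over plain HTTP. Only SSL/TLS does that.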

    Read the article

  • Capturing time intervals when somebody was online? How would you implement this feature?

    - by Kirzilla
    Hello, our aim is to build timelines showing the periods of time when a user was online. (It really doesn't matter which user we are talking about or where he was online.) To get information about who is online we can call an API method, someservice.com/api/?call=whoIsOnline. The whoIsOnline method gives us a list of users currently online, but there is no API method to find out who IS NOT online. So we have to build our timelines using only the information we get from whoIsOnline. Of course there will be a measurement error (we can't track the information in real time). Let's suppose we call the whoIsOnline method every 2 minutes (yes, we will run our script by cron every 2 minutes). For example, calling whoIsOnline at 08:00 returns: Peter_id, Michal_id, Andy_id. Calling whoIsOnline at 08:02 returns: Michael_id, Andy_id, George_id. As you can see, Peter has gone offline, but we have a new onliner, George.

    Available instruments are a DB (MySQL), text files, and key-value storage (Redis/memcache); feel free to choose any of them (or even all of them). So we have to end up with information like this: George_id was online... 12 May: 08:02-08:30, 12:40-12:46, 20:14-22:36; 11 May: 09:10-12:30, 21:45-23:00; 10 May: was not online. And now the question... How would you store the information to implement such timelines? How would you query/calculate the periods of time when a user was online?

    Additional information: you cannot update information about offline users, only users who are "currently" online. The solution should be flexible: the timeline information could be presented relative to any timezone. We only need to keep information for the last 7 days. Every user seen online automatically gets his own identifier in our database. Uff... it was really hard for me to write this because my English is pretty bad, but I hope my question is clear. Thank you.
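    One way to store it (a sketch only; table and column names are made up): keep one MySQL row per contiguous online interval and extend the open interval for as long as the user keeps appearing in whoIsOnline.

        -- One row per contiguous interval of presence
        CREATE TABLE online_intervals (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id    INT UNSIGNED NOT NULL,
            started_at DATETIME     NOT NULL,
            last_seen  DATETIME     NOT NULL,
            KEY user_seen (user_id, last_seen)
        );

        -- For each user returned by whoIsOnline on a cron run:
        -- extend an interval that is still "open" (seen within the last poll or two)...
        UPDATE online_intervals
           SET last_seen = UTC_TIMESTAMP()
         WHERE user_id = ? AND last_seen >= UTC_TIMESTAMP() - INTERVAL 3 MINUTE;

        -- ...and if no row was affected, open a new interval
        INSERT INTO online_intervals (user_id, started_at, last_seen)
        VALUES (?, UTC_TIMESTAMP(), UTC_TIMESTAMP());

        -- Housekeeping: keep only the last 7 days
        DELETE FROM online_intervals WHERE last_seen < UTC_TIMESTAMP() - INTERVAL 7 DAY;

    Storing UTC and converting at display time keeps the timezone requirement out of the storage layer; the per-user report is then a simple range query on (user_id, last_seen).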

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self-joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data; said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems: SQL Server was re-running the SELECT statement in the CTE for every join instead of running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window, outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible, as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe the table variable is slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary-table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for making my UDF perform quickly? Thanks a lot in advance.
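    If the callers can be pointed at a stored procedure instead of a UDF, the fast temporary-table version can be wrapped directly, since procedures have no such restriction. A skeleton only, with invented table and column names:

        CREATE PROCEDURE dbo.GetSubsetReport
            @StartDate date,
            @EndDate   date
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Temp tables are allowed here, unlike in multi-statement UDFs
            SELECT id, parent_id, amount
            INTO #subset
            FROM dbo.BigTable
            WHERE created_on BETWEEN @StartDate AND @EndDate;

            CREATE INDEX IX_subset_parent ON #subset (parent_id);

            -- The self-joins now run against the small, indexed temp table,
            -- and the final SELECT is the result set returned to the caller
            SELECT a.id, a.amount, b.amount AS parent_amount
            FROM #subset a
            JOIN #subset b ON b.id = a.parent_id;
        END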

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5 GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200 GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key: [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC. When we query the table we always have the entire primary key, so these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, the results of our query come back quickly enough (200 ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data-warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
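    For reference, the non-clustered index described above might look like the sketch below (object names taken from the question). With FileDate as the second key column, MAX([FileDate]) per Region can be answered from the index itself rather than by scanning the clustered index.

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region_FileDate
        ON dbo.HugeTable ([Region] ASC, [FileDate] DESC);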

    Read the article

  • Creating a framework for ASP.NET web forms similar to Flex states.

    - by Shawn Simon
    I really enjoy the Flex states framework. You define a few states for your control, and you can then set child controls to appear only in certain states. Check out this code:

        <s:states>
            <s:State name="signin"/>
            <s:State name="register"/>
        </s:states>

        <mx:FormItem label="Last name:" includeIn="register" id="lastNameItem" alpha="0.0">
            <s:TextInput id="lastName" width="220"/>
        </mx:FormItem>

    Now the last-name form item will only appear on the register screen. This would be really useful, I think, in .NET, where you use the same page for views like update / insert. I was considering extending the Page element to have a states property using extension methods, and adding the includeIn attribute to controls. This way I could auto-hide controls based on the current view at render time. What is even cooler in Flex is that you can use different handlers / properties based on the current state:

        <s:Button label="Sign in" label.register="Register" id="loginButton" enabled="true"
                  click.signin="signin()" click.register="register()"/>

    I'm sure there's a way I could implement something similar to this as well. Do you think this is a good idea? Or does it just add a level of abstraction to a framework that already has a poor separation of concerns?

    Read the article

  • Connect from java mobile application to webservice to read messages.

    - by Alexandru Trandafir Catalin
    Hello, I have a website where users can send personal messages to each other; now I want them to receive the messages on their mobile phones as well, but without having to send them an SMS. I am thinking about providing them with a mobile phone with Internet access over GPRS or 3G, and then developing a Java application that will connect to the website and retrieve the messages. On the website I am thinking of making a web service where the phone will log in, get new messages, and also be able to reply to messages. Does anyone know of a mobile application tutorial that covers this? Or can you recommend where to start? I have never built a Java mobile application before; I only work with websites and PHP. I also tried to use ICQ: the client is already done for Java and for iPhone, and I've also found a script that will send ICQ messages from PHP, but the ICQ server bans you for 20 minutes when you reconnect too often, so I would have to develop some kind of always-online ICQ bot that checks the MySQL database for new messages to send and then sends them, one per 2-3 seconds, so the server won't ban me for flooding. Well, any advice or recommendation is welcome on how to let users connect to the website messaging system from their phones. Thank you!

    Read the article

  • Connect XAMPP to MongoDB

    - by Jhonny D. Cano -Leftware-
    Hello, I have a XAMPP 1.7.3 instance running and a MongoDB 1.2.4 server on the same machine. I want to connect them, so I have basically been following this tutorial on php.net. It seems to connect, but the cursors are always empty and I don't know what I am missing. Here is the code I am trying; the cursor's valid() always says false. Thanks.

        <?php
        $m = new Mongo(); // connect
        try {
            $m->connect();
        } catch (MongoConnectionException $ex) {
            echo $ex;
        }
        echo "conecta...";
        $dbs = $m->listDBs();
        if ($dbs == NULL) {
            echo $m->lastError();
            return;
        }
        foreach ($dbs as $db) {
            echo $db;
        }
        $db = $m->selectDB("CDO");
        echo "elige bd...";
        $col = $db->selectCollection("rep_consulta");
        echo "elige col...";
        $rangeQuery = array('id' => array('$gt' => 100));
        $col->insert(array('id' => 456745764, 'nombre' => 'cosa'));
        $cursor = $col->find()->limit(10);
        echo "buscando...";
        var_dump($cursor);
        var_dump($cursor->valid());
        if ($cursor == NULL) echo 'cursor null';
        while ($cursor->hasNext()) {
            $item = $cursor->current();
            echo "en while...";
            echo $item["nombre"].'...';
        }
        ?>

    Doing this from the command line works perfectly:

        use CDO
        db.rep_consulta.find()   -- lot of data here
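    For what it's worth, with the 1.x PHP Mongo driver the cursor only advances when it is iterated, so var_dump($cursor->valid()) before any iteration reports false even when data exists. Iterating with foreach (or calling getNext()) is the usual pattern; a small variation of the loop above, offered as a sketch:

        foreach ($col->find()->limit(10) as $doc) {
            echo $doc['nombre'] . '...';
        }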

    Read the article

  • DQL delete from multiple tables (Doctrine)

    - by singer
    I need to perform a DQL delete from multiple related tables. In SQL it is something like this:

        DELETE r1, r2
        FROM ComRealty_objects r1, com_realty_objects_phones r2
        WHERE r1.id IN (10,20) AND r2.id_object IN (10,20)

    I need to perform this statement using DQL, but I'm stuck on this :(

        <?php
        $dql = Doctrine_Query::create()
            ->delete('phones, comrealtyobjects')
            ->from('ComRealtyObjects comrealtyobjects')
            ->from('ComRealtyObjectsPhones phones')
            ->whereIn("comrealtyobjects.id", $ids)
            ->whereIn("phones.id_object", $ids);
        echo($dql->getSqlQuery());
        ?>

    But the DQL parser gives me this result:

        DELETE FROM `com_realty_objects_phones`, `ComRealty_objects` WHERE (`id` IN (?) AND `id_object` IN (?))

    Searching Google and Stack Overflow I found this (useful) topic: http://stackoverflow.com/questions/2247905/what-is-the-syntax-for-a-multi-table-delete-on-a-mysql-database-using-doctrine But this is not exactly my case: there the delete was from a single table. Is there a way to override the DQL parser behaviour? Or maybe some other way to delete records from multiple tables using Doctrine? Note: if you are using Doctrine behaviours (Doctrine_Record_Generator), you first need to initialize those tables using Doctrine_Core::initializeModels() to perform DQL operations on them.

    Read the article

  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
    Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help! I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error:

        [Microsoft][ODBC SQL Server Driver]String data, right truncation

    So, can anyone advise me on whether this can be done, and how? Some example (not production) code that shows what I'm trying to do:

        SQLHENV hEnv = NULL;
        SQLRETURN iError = SQLAllocEnv(&hEnv);
        HDBC hDbc = NULL;
        SQLAllocConnect(hEnv, &hDbc);
        const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB";
        UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH];
        SWORD iConnectOutLen = 0;
        iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS,
                                  szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1), &iConnectOutLen,
                                  SQL_DRIVER_COMPLETE);
        HSTMT hStmt = NULL;
        iError = SQLAllocStmt(hDbc, &hStmt);
        const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)";
        iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS);
        char* pszBigString = AllocBigString(8001);
        iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0, (SQLPOINTER)pszBigString, NULL);
        iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000

    The table MyTestTable contains a single column defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000-character limit on varchars, but I don't understand why this is happening with SQL Server 2005. Thanks, Andy

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
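    For reference, the pop-minimum / push-larger pattern described here maps directly onto a binary min-heap; a small C++ sketch with std::priority_queue, using long long as a stand-in for the big-rational type:

        #include <queue>
        #include <vector>
        #include <functional>

        int main() {
            // std::greater turns the default max-heap into a min-heap
            std::priority_queue<long long, std::vector<long long>, std::greater<long long> > work;
            work.push(42);                        // initial item

            while (!work.empty()) {
                long long smallest = work.top();  // O(1) access to the minimum
                work.pop();                       // O(log n) removal
                // ... process 'smallest', then push 0-2 strictly larger items ...
                if (smallest < 1000) {
                    work.push(smallest * 2);      // O(log n) insert
                    work.push(smallest * 3);
                }
            }
            return 0;
        }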

    Read the article

  • Creating and using a static lib in Xcode

    - by Alasdair Morrison
    I am trying to create a static library in Xcode and link to that static library from another program. So as a test I have created a BSD static C library project and just added the following code:

        // Test.h
        int testFunction();

        // Test.cpp
        #include "Test.h"
        int testFunction() {
            return 12;
        }

    This compiles fine and creates a .a file (libTest.a). Now I want to use it in another program, so I create a new Xcode project (Cocoa application) with the following code:

        // main.cpp
        #include <iostream>
        #include "Testlib.h"

        int main (int argc, char * const argv[]) {
            // insert code here...
            std::cout << "Result:\n" << testFunction();
            return 0;
        }

        // Testlib.h
        extern int testFunction();

    I right-clicked on the project - Add - Existing Framework - Add Other, selected the .a file, and it was added to the project view. I always get this linker error:

        Build TestUselibrary of project TestUselibrary with configuration Debug
        Ld build/Debug/TestUselibrary normal x86_64
        cd /Users/myname/location/TestUselibrary
        setenv MACOSX_DEPLOYMENT_TARGET 10.6
        /Developer/usr/bin/g++-4.2 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk -L/Users/myname/location/TestUselibrary/build/Debug -L/Users/myname/location/TestUselibrary/../Test/build/Debug -F/Users/myname/location/TestUselibrary/build/Debug -filelist /Users/myname/location/TestUselibrary/build/TestUselibrary.build/Debug/TestUselibrary.build/Objects-normal/x86_64/TestUselibrary.LinkFileList -mmacosx-version-min=10.6 -lTest -o /Users/myname/location/TestUselibrary/build/Debug/TestUselibrary
        Undefined symbols:
          "testFunction()", referenced from:
              _main in main.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    I am new to Mac OS X development and fairly new to C++. I am probably missing something fairly obvious; all my experience comes from creating DLLs on the Windows platform. I really appreciate any help.

    Read the article

  • How can I factor out repeated expressions in an SQL query? Column aliases don't seem to be the ticket

    - by Weston C
    So, I've got a query that looks something like this:

        SELECT id,
            DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%b %d %Y') as callDate,
            DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%H:%i') as callTimeOfDay,
            SEC_TO_TIME(callLength) as callLength
        FROM cs_calldata
        WHERE customerCode='999999-abc-blahblahblah'
            AND CONVERT_TZ(callTime,'+0:00','-7:00') >= '2010-04-25'
            AND CONVERT_TZ(callTime,'+0:00','-7:00') <= '2010-05-25'

    If you're like me, you probably start thinking that it might improve readability, and possibly the performance of this query, if I weren't asking it to compute CONVERT_TZ(callTime,'+0:00','-7:00') four separate times. So I try to create a column alias for that expression and replace further occurrences with that alias:

        SELECT id,
            CONVERT_TZ(callTime,'+0:00','-7:00') as callTimeZoned,
            DATE_FORMAT(callTimeZoned,'%b %d %Y') as callDate,
            DATE_FORMAT(callTimeZoned,'%H:%i') as callTimeOfDay,
            SEC_TO_TIME(callLength) as callLength
        FROM cs_calldata
        WHERE customerCode='5999999-abc-blahblahblah'
            AND callTimeZoned >= '2010-04-25'
            AND callTimeZoned <= '2010-05-25'

    This is when I learned, to quote the MySQL manual: "Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is evaluated, the column value may not yet have been determined." So that approach would seem to be dead in the water. How is someone writing queries with recurring expressions like this supposed to deal with it?
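    One common way around the restriction, sketched from the first query and untested, is to compute the expression once in a derived table and filter on it in the outer query:

        SELECT id,
            DATE_FORMAT(callTimeZoned,'%b %d %Y') AS callDate,
            DATE_FORMAT(callTimeZoned,'%H:%i')    AS callTimeOfDay,
            SEC_TO_TIME(callLength)               AS callLength
        FROM (
            SELECT id, callLength,
                   CONVERT_TZ(callTime,'+0:00','-7:00') AS callTimeZoned
            FROM cs_calldata
            WHERE customerCode='999999-abc-blahblahblah'
        ) AS calls
        WHERE callTimeZoned >= '2010-04-25'
          AND callTimeZoned <= '2010-05-25';

    Note that filtering on the converted value still defeats any index on callTime; comparing the raw column against pre-converted boundary values is usually the faster route.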

    Read the article

  • Error in the syntax of cmd parameters

    - by KATy
    Hi guys, in this method I am just adding values to the DB. temp is an object; the field values and the variables in the object have the same names. I don't know why this error is coming up. Please help me out.

        public virtual void Save_input_parameter_details(Test_Unit_BLL temp)
        {
            SqlConnection con;
            con = new SqlConnection("Data Source=VV;Initial Catalog=testingtool;User ID=sa;Password=sa;");
            con.Open();
            SqlCommand cmd, cmd2, cmd3;
            //try
            //{
            for (int i = 0; i < temp.No_Input_parameters; i++)
            {
                cmd2 = new SqlCommand("insert into Input_parameter_details values(@Input_Parameter_name,@Input_Parameter_datatype,@noparams,@class_code", con);
                cmd2.Parameters.AddWithValue("@Input_Parameter_datatype", temp.Input_Parameter_datatype[i]);
                cmd2.Parameters.AddWithValue("@Input_Parameter_name", temp.Input_Parameter_name[i]);
                cmd2.Parameters.AddWithValue("@noparams", temp.No_Input_parameters);
                cmd2.Parameters.AddWithValue("@class_code", temp.class_code);
                cmd2.ExecuteNonQuery();
            }
            //}
            //catch (Exception ex)
            //{
            //    MessageBox.Show("error" + ex);
            //}
        }
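    If the error reported is a SQL syntax error, the most likely culprit is the command text itself: the VALUES list is never closed. A corrected version of that one line, with everything else unchanged:

        cmd2 = new SqlCommand(
            "insert into Input_parameter_details values(@Input_Parameter_name, @Input_Parameter_datatype, @noparams, @class_code)",
            con);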

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table: files - file_id, file_name users - user_id, user_name users_files_ref - user_file_ref_id, user_id, file_id I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario? Thanks for any help provided, it's much appreciated. -Wes

    Read the article

  • Android ODBC connection

    - by Vijay Kumar
    I want to make an ODBC-style database connection from my Android application. In my program I am using Oracle Database 11g, and my table name is sample. After I run the program, close the emulator, and open the database, the values have not been stored. Please suggest a solution, or any changes to my program or connection string.

        package com.odbc;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        import android.app.Activity;
        import android.os.Bundle;

        public class OdbcActivity extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                String first = "vijay";
                String last = "kumar";
                try {
                    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                    Connection con = DriverManager.getConnection("jdbc:oracle:thin:@localshot:1521:XE", "system", "vijay");
                    PreparedStatement pst = con.prepareStatement("insert into sample(first,last) values(?,?");
                    pst.setString(1, first);
                    pst.setString(2, last);
                    pst.executeUpdate();
                } catch (Exception e) {
                    System.out.println("Exception:" + e);
                }
            }
        }
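    Two details in the snippet look like likely culprits (hedged guesses, since the exception text isn't shown): the JDBC URL says "localshot" rather than "localhost" (and from the Android emulator, the host machine is reachable as 10.0.2.2, not localhost), and the SQL string is missing its closing parenthesis. The corrected pair of lines might look like this:

        Connection con = DriverManager.getConnection("jdbc:oracle:thin:@10.0.2.2:1521:XE", "system", "vijay");
        PreparedStatement pst = con.prepareStatement("insert into sample(first,last) values(?,?)");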

    Read the article

  • EOL Special Char not matching

    - by Aurélien Ribon
    Hello, I am trying to find every "a -> b, c, d" pattern in an input string. The pattern I am using is the following:

        "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$"

    This is a C# pattern: the "\t" is a single-escaped literal interpreted by the .NET String API as a tab, and the "\w" is the well-known regex character class, double-escaped so that the .NET String API passes "\w" through to the .NET Regex API as a word-character class. The input is:

        a -> b
        b -> c
        c -> d

    The function is:

        private void ParseAndBuildGraph(String input) {
            MatchCollection mc = Regex.Matches(input, "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)$", RegexOptions.Multiline);
            foreach (Match m in mc) {
                Debug.WriteLine(m.Value);
            }
        }

    The output is:

        c -> d

    Actually, there is a problem with the line-ending "$" special character. If I insert a "\r" before "$", it works, but I thought "$" would match any line termination (with the Multiline option), especially a \r\n in a Windows environment. Is that not the case?
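    For reference, with RegexOptions.Multiline .NET's "$" matches just before "\n" only; it does not treat "\r\n" as a unit, which is why the "\r" is left dangling on every line except the last. One common workaround, shown here as a sketch, is to allow an optional carriage return before the anchor:

        MatchCollection mc = Regex.Matches(input,
            "^[ \t]*(\\w+)[ \t]*->[ \t]*(\\w+)((?:,[ \t]*\\w+)*)\r?$",
            RegexOptions.Multiline);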

    Read the article

  • Reverse-ordered list for jQuery-submitted comments

    - by g-man
    Hey guys, I have one more question. I am using a script that allows users to submit comments through jQuery AJAX; however, the submitted comments are appended at the bottom of the existing comments, which are sorted in descending order (newest on top) when the page first loads (due to the MySQL query). Is there a way to make a new comment appear on top, through some sort of sorting JavaScript function?

        function prepare(response) {
            var d = new Date();
            count++;
            d.setTime(response.time*1000);
            var mytime = d.getHours()+':'+d.getMinutes()+':'+d.getSeconds();
            var string = '<li class="shoutbox-list" id="list-'+count+'">'
                + '<span class="date">'+mytime+'</span>'
                + '<span class="shoutbox-list-nick"><a href="statistics.php?user='+response.user+'">'+response.user+'</a>:</span>'
                + '<span class="msg">'+response.message+'</span>'
                + '</li>';
            return string;
        }

        function success(response, status) {
            if(status == 'success') {
                lastTime = response.time;
                $('#daddy-shoutbox-list').append(prepare(response));
                $('input[name=message]').attr('value', '').focus();
                $('#list-'+count).fadeIn('slow');
                timeoutID = setTimeout(refresh, 3000);
            }
        }

        <div id="daddy-shoutbox">
            <ol id="daddy-shoutbox-list"></ol>
        </div>
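    A small sketch of the usual fix: insert the new item at the top of the list instead of the bottom by swapping append() for prepend() in the success handler, with everything else unchanged:

        $('#daddy-shoutbox-list').prepend(prepare(response));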

    Read the article

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the result to a CSV file; it has around 90,000 rows of data. My question is: if I use PDOStatement->fetch, as I am using it here:

        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);

        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    will the result of every fetch be kept in memory? Meaning that when I do the second fetch, memory holds the data of the first fetch as well as the second, so if I have 90,000 rows and fetch one at a time, memory keeps accumulating the new results without releasing the previous ones, and by the last fetch memory already holds 89,999 rows of data. Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?
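    For what it's worth, with the MySQL driver the memory behaviour is usually decided by buffering rather than by fetch() itself: by default the whole result set is buffered client-side when execute() runs, whether you later call fetch() or fetchAll(). A sketch of streaming the export row by row with an unbuffered query (PDO::MYSQL_ATTR_USE_BUFFERED_QUERY is the MySQL-specific PDO attribute; file name is illustrative):

        $this->connection->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

        $stmt = $this->connection->prepare("SELECT * FROM tableName");
        $stmt->execute();

        $out = fopen('export.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($out, $row);   // each row is written out and can be freed
        }
        fclose($out);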

    Read the article

  • jQuery question for events

    - by OM The Eternity
    I have to copy the text from one text box to another, controlled by a checkbox, using jQuery. I have applied a jQuery event in my code and it works partially; the code is as follows:

        <html>
        <head>
            <script src="js/jquery.js"></script>
        </head>
        <body>
            <form>
                <input type="text" name="startdate" id="startdate" value=""/>
                <input type="text" name="enddate" id="enddate" value=""/>
                <input type="checkbox" name="checker" id="checker" />
            </form>
            <script>
                $(document).ready(function(){
                    $("#startdate").change(function(o){
                        if($("#checker").is(":checked")){
                            $("#enddate").val($("#startdate").val());
                        }
                    });
                });
            </script>
        </body>
        </html>

    This works as follows: the checkbox is checked by default, so whenever I enter the start date and tab away, the start date gets copied to the end date. My problem: if I uncheck the checkbox, change the start date, and then re-check the checkbox, the start date is not copied. What should be done in this situation? Please help me.
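    A small sketch of one way to cover that case: also copy the value when the checkbox itself changes, so re-checking it picks up whatever is already in the start date field.

        $("#checker").change(function(){
            if ($(this).is(":checked")) {
                $("#enddate").val($("#startdate").val());
            }
        });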

    Read the article

  • PDF Form Field Manipulation

    - by 108039818756939362532
    I'm making a web interface to auto-fill PDF forms with user data from a database. The admin needs to be able to upload a PDF (right now targeting IRS PDF forms) and then associate the fields in the PDF with data fields in the database. I need a way to help the admin associate the field names (stuff like "topmostSubform[0].Page2[0].p2-t66[0]") with the data fields in the database, so I'm looking for a way to modify the PDF programmatically to provide this information. Basically I'm open to suggestions on how I might make the field names appear in an obvious manner on a modified version of the original PDF. The closest I've gotten is being able to insert tooltips into the fields in the PDF by just editing the raw PDF line by line. However, when editing the PDF in this manner the field names are gibberish, so I can't just use them. An optimal solution would be anything that could automatically parse a PDF and set each field's tooltip to be the field's name. Anything that can be run from the command line, or any Python tool, or just a basic how-to on correctly parsing a field's name from a raw PDF file would be amazing.
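    One command-line option worth looking at (whether it fits this exact workflow is an assumption, but the tool and operations are standard): pdftk can dump every AcroForm field name, which at least gives the admin a readable list to map against the database, and it can fill the form back in from data.

        # List all form fields (names, types, current values) in the uploaded PDF
        pdftk form.pdf dump_data_fields > fields.txt

        # Fill the form from an FDF file generated from database values
        pdftk form.pdf fill_form data.fdf output filled.pdf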

    Read the article

  • Pseudo code for instruction description

    - by Claus
    Hi, I am just trying to work out the best and shortest way to describe two simple instructions with C-like pseudocode. The extract instruction is defined as follows:

        extract rd, rs, imm

    This instruction extracts the appropriate byte from the 32-bit source register rs and right-justifies it in the destination register. The byte is specified by imm and thus can take the values 0 (for the least-significant byte) to 3 (for the most-significant byte).

        rd = 0x0;                          // zero-extend the result, i.e. make sure bits 31 to 8 are zero
        rd = (rs && (0xff << imm)) >> imm; // extract the appropriate byte and store it in rd

    The insert instruction can be regarded as the inverse operation: it takes a right-justified byte from the source register rs and deposits it in the appropriate byte of the destination register rd; again, this byte is determined by the value of imm.

        tmp = 0x0 XOR (rs << imm)          // shift the byte to the position determined by imm
        rd = (rd && (0x00 << imm))         // set the appropriate byte of rd to zero
        rd = rd XOR tmp                    // XOR the byte into the destination register

    This all looks a bit horrible, so I wonder if there is a slightly more elegant way to describe this behaviour in C-like style ;) Many thanks, Claus
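    A possibly cleaner formulation, offered as a sketch: it assumes "the appropriate byte" means byte number imm, so the shift distance is imm*8 bits, and it uses bitwise & and | rather than the logical && above.

        /* extract rd, rs, imm : right-justify byte 'imm' of rs into rd */
        rd = (rs >> (imm * 8)) & 0xff;

        /* insert rd, rs, imm : deposit the low byte of rs into byte 'imm' of rd */
        rd = (rd & ~(0xffu << (imm * 8))) | ((rs & 0xffu) << (imm * 8));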

    Read the article
