Search Results

Search found 10242 results on 410 pages for 'stored proc'.

Page 182/410 | < Previous Page | 178 179 180 181 182 183 184 185 186 187 188 189  | Next Page >

  • mysql db image convert to file

    - by ntan
    Hi, I am writing a converter from Oracle to MySQL. In Oracle the images are stored in the database. I want to read the content of each image and save it to the file system. I suppose I have to read the BLOB entry and create the file using PHP file commands (am I right?). What about the image type - should I save it as JPG (and what if the stored image is not a JPG)? Any suggestions are welcome.

    Read the article

  • iPhone: Tracking/Identifying individual touches

    - by FlorianZ
    I have a quick question regarding tracking touches on the iPhone, and I can't seem to come to a conclusion on this, so any suggestions / ideas are greatly appreciated: I want to be able to track and identify touches on the iPhone, i.e. basically every touch has a starting position and a current/moved position. Touches are stored in a std::vector and shall be removed from the container once they have ended. Their position shall be updated once they move, but I still want to keep track of where they initially started (gesture recognition). I am getting the touches from [event allTouches]. The thing is, the NSSet is unsorted, and I cannot seem to match the touches already stored in the std::vector against the touches in the NSSet (so that I know which ones have ended and shall be removed, which ones have moved, etc.). Here is my code, which works perfectly with only one finger on the touch screen, of course, but with more than one I get unpredictable results:

        - (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event { [self handleTouches:[event allTouches]]; }
        - (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event { [self handleTouches:[event allTouches]]; }
        - (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event { [self handleTouches:[event allTouches]]; }
        - (void) touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event { [self handleTouches:[event allTouches]]; }

        - (void) handleTouches:(NSSet*)allTouches {
            for(int i = 0; i < (int)[allTouches count]; ++i) {
                UITouch* touch = [[allTouches allObjects] objectAtIndex:i];
                NSTimeInterval timestamp = [touch timestamp];
                CGPoint currentLocation = [touch locationInView:self];
                CGPoint previousLocation = [touch previousLocationInView:self];
                if([touch phase] == UITouchPhaseBegan) {
                    Finger finger;
                    finger.start.x = currentLocation.x;
                    finger.start.y = currentLocation.y;
                    finger.end = finger.start;
                    finger.hasMoved = false;
                    finger.hasEnded = false;
                    touchScreen->AddFinger(finger);
                } else if([touch phase] == UITouchPhaseEnded || [touch phase] == UITouchPhaseCancelled) {
                    Finger& finger = touchScreen->GetFingerHandle(i);
                    finger.hasEnded = true;
                } else if([touch phase] == UITouchPhaseMoved) {
                    Finger& finger = touchScreen->GetFingerHandle(i);
                    finger.end.x = currentLocation.x;
                    finger.end.y = currentLocation.y;
                    finger.hasMoved = true;
                }
            }
            touchScreen->RemoveEnded();
        }

    Thanks!

    Read the article

  • Amazon EC2 Nat Instance - goes out but not back in

    - by nocode
    I've followed Amazon's steps; here is what I've done. I've created 6 subnets: 4 private (SN1: 10.50.1.0/24, SN2: 10.50.2.0/24, SN3: 10.50.3.0/24, SN4: 10.50.4.0/24) and 2 public (SN5: 10.50.101.0/24 and SN6: 10.50.102.0/24).
    - I have a Bastion host and a NAT instance on SN5 and assigned EIPs to both. I created a test instance on SN1.
    - (edit) The NAT instance has its source/destination check disabled.
    - On the NAT instance, I had the following commands run at bootstrap:
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -j MASQUERADE
    - In my VPC, the private subnets have their own route table with 0.0.0.0/0 pointed at the NAT instance, and all 4 private subnets are associated with that route table. I have a second route table for my public subnets with 0.0.0.0/0 pointed at the IGW (and the other 2 subnets associated with it).
    - For security groups, the NAT instance accepts all traffic from each of the 4 private subnets, and all outbound traffic is allowed. The test server allows all outbound access and accepts all traffic from the public subnet of the NAT host.
    I can ping internally with no issues. On my test instance, if I try to ping google.com, DNS resolves but I don't get a reply back. On the NAT instance, a tcpdump shows the request going out to google.com but the reply never being sent back to the test instance. The NAT host itself can ping google.com and receives replies. From the test host, when I ping the NAT instance, tcpdump shows both request and reply. Is there something I'm missing?
    EDIT: I've figured it out - I had to save the iptables config and restart the service.

    Read the article

  • IP queue buffer

    - by summerbulb
    I seem to have an issue with IP queue. I have a Linux machine that I am using to run some experiments. It is configured as a router, with two NICs, connecting two other computers and managing their network traffic. All incoming packets are captured using iptables and analyzed by a C application. The analyzing application has a built-in delay, as part of the experiment.

    So I have one very fast computer sending packets through my Linux router, and a (relatively) slow router that analyses and deals with the packets one by one. As a result, when I fire up a sender application on one of the computers connected to the router, the IP queue on the router fills up (almost) instantly. The IP queue's maximum length is currently set to 1024, and if it overflows, packets are dropped. This is expected and I'm OK with it.

    But (and this is where it gets interesting) every now and then I get the following error:

        Failed to receive netlink message: No buffer space available

    At first I thought this was due to the IP queue overflowing, but after some analysis I found that sometimes I get the error even though the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow. When I run cat /proc/net/ip_queue, I get the following table (which I also use to monitor IP queue overflow):

        Peer PID          : 27389
        Copy mode         : 2
        Copy range        : 65535
        Queue length      : 0
        Queue max. length : 1024
        Queue dropped     : 1166875
        Netlink dropped   : 2916

    Looking at the last two values, "Queue dropped" seems to count packets that did not make it into the IP queue because the buffer was full; I can see this value rise as I bombard the router. "Netlink dropped" (as its name implies :) ) seems to be related to the error I'm getting. I did my best to search for material on this error, but wasn't able to find anything that pointed me in the right direction.

    Bottom line: why am I getting this error, and what can I do to avoid it?

    Read the article

  • Selenium Test Runner and variables problem

    - by quilovnic
    Hi, in my Selenium test suite (HTML), I define a first test case that initializes a variable used in the next test case. Sample:
        In the first script:  store|//div[@id="myfield"]|myvar
        In the second script: type|${myvar}|myvalue
    But when I start the test runner (from Maven), it returns an error saying that ${myvar} is not found; the value contained in the stored variable is not used. Any suggestion? Thanks a lot.

    Read the article

  • How to get compatibility between C# and SQL2k8 AES Encryption?

    - by Victor Rodrigues
    I have an AES encryption being applied to two columns: one column is stored in a SQL Server 2000 database, the other in a SQL Server 2008 database. Since the first column's database (2000) has no native encryption/decryption functionality, we decided to implement the cryptography logic at application level, with .NET classes, for both. But since the second column's database (2008) does support this functionality, we'd like to run that migration using the database functions, because it would be faster: the SQL 2000 migration is much smaller than this one, and this one would take more than 50 hours if done at application level.

    My problem starts here: using the same key, I don't get the same result when encrypting a value, nor even the same result size. Below is the full logic on both sides. Of course I'm not showing the key, but everything else is the same:

        private byte[] RijndaelEncrypt(byte[] clearData, byte[] Key)
        {
            var memoryStream = new MemoryStream();
            Rijndael algorithm = Rijndael.Create();
            algorithm.Key = Key;
            algorithm.IV = InitializationVector;
            var criptoStream = new CryptoStream(memoryStream, algorithm.CreateEncryptor(), CryptoStreamMode.Write);
            criptoStream.Write(clearData, 0, clearData.Length);
            criptoStream.Close();
            byte[] encryptedData = memoryStream.ToArray();
            return encryptedData;
        }

        private byte[] RijndaelDecrypt(byte[] cipherData, byte[] Key)
        {
            var memoryStream = new MemoryStream();
            Rijndael algorithm = Rijndael.Create();
            algorithm.Key = Key;
            algorithm.IV = InitializationVector;
            var criptoStream = new CryptoStream(memoryStream, algorithm.CreateDecryptor(), CryptoStreamMode.Write);
            criptoStream.Write(cipherData, 0, cipherData.Length);
            criptoStream.Close();
            byte[] decryptedData = memoryStream.ToArray();
            return decryptedData;
        }

    This is the SQL code sample:

        open symmetric key columnKey decryption by password = N'{pwd!!i_ll_not_show_it_here}'
        declare @enc varchar(max)
        set @enc = dbo.VarBinarytoBase64(EncryptByKey(Key_GUID('columnKey'), 'blablabla'))
        select LEN(@enc), @enc

    VarBinarytoBase64 is a tested SQL function we use to convert varbinary to the same format we use to store strings in the .NET application.

    The result in C# is:
        eg0wgTeR3noWYgvdmpzTKijkdtTsdvnvKzh+uhyN3Lo=
    The same result in SQL 2008 is:
        AI0zI7D77EmqgTQrdgMBHAEAAACyACXb+P3HvctA0yBduAuwPS4Ah3AB4Dbdj2KBGC1Dk4b8GEbtXs5fINzvusp8FRBknF15Br2xI1CqP0Qb/M4w

    I just haven't figured out what I'm doing wrong. Do you have any ideas?

    EDIT: One point I think is crucial: I have one initialization vector in my C# code, 16 bytes. This IV is not set on the SQL symmetric key - could I do that? But even without setting the IV in C#, I get very different results, both in content and length.
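
    For reference, the outputs can never match byte-for-byte: SQL Server's EncryptByKey does not emit raw AES blocks; it wraps the ciphertext in its own envelope (reportedly the symmetric key's GUID, version info and a random per-call IV), so it only round-trips through DecryptByKey. A minimal C# sketch of my own (not code from the question) that makes the size difference visible from the two Base64 strings above:

        using System;

        class CipherTextComparison
        {
            static void Main()
            {
                // The two outputs quoted in the question (Base64).
                byte[] fromCSharp = Convert.FromBase64String(
                    "eg0wgTeR3noWYgvdmpzTKijkdtTsdvnvKzh+uhyN3Lo=");
                byte[] fromSql = Convert.FromBase64String(
                    "AI0zI7D77EmqgTQrdgMBHAEAAACyACXb+P3HvctA0yBduAuwPS4Ah3AB4Dbdj2KB" +
                    "GC1Dk4b8GEbtXs5fINzvusp8FRBknF15Br2xI1CqP0Qb/M4w");

                // Raw Rijndael/CryptoStream output: a whole number of 16-byte blocks.
                Console.WriteLine("C# ciphertext length:  {0}", fromCSharp.Length);  // 32
                // EncryptByKey output: header + padded ciphertext, so noticeably longer.
                Console.WriteLine("SQL ciphertext length: {0}", fromSql.Length);

                // The leading bytes of the SQL output are the symmetric key's GUID;
                // compare against SELECT KEY_GUID('columnKey') to confirm on your server.
                Console.WriteLine("SQL header GUID: {0}", new Guid(SubArray(fromSql, 0, 16)));
            }

            static byte[] SubArray(byte[] source, int offset, int count)
            {
                byte[] result = new byte[count];
                Array.Copy(source, offset, result, 0, count);
                return result;
            }
        }

    In other words, to keep the two sides interoperable you either decrypt/re-encrypt everything on one side, or accept that the SQL 2008 column can only be read back with DecryptByKey.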

    Read the article

  • Specifying schema for temporary tables

    - by Tom Hunter
    I'm used to seeing temporary tables created with just the hash/number symbol, like this:
        CREATE TABLE #Test ( [Id] INT )
    However, I've recently come across stored procedure code that specifies the schema name when creating temporary tables, for example:
        CREATE TABLE [dbo].[#Test] ( [Id] INT )
    Is there any reason why you would want to do this? If you're only specifying the user's default schema, does it make any difference? Does this refer to the [dbo] schema in the local database or in the tempdb database?

    Read the article

  • index 'enabled' fields good idea?

    - by sibidiba
    The content of a website is stored in a MySQL database. 99% of the content will be enabled, but some of it (users, posts, etc.) will be disabled. Most of the queries end with WHERE (...) AND enabled. Is it a good idea to create an index on the field 'enabled'?

    Read the article

  • Logout from a desktop application to change user in C#.NET

    - by Sadequzzaman Monoh
    I have designed a desktop application using C#.NET that has many users, and each user has specific rights. The user logs into the system when the application first starts, and the UserID number is stored and used throughout the app; but when they want to change user (UserID), they have to close the system down and start again. How would I go about creating a 'log out' / 'log in' function that keeps the main form open but disabled, allowing a new user to log in?
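
    One possible approach (a rough WinForms sketch of my own; LoginForm, Session and ApplyUserRights are hypothetical stand-ins for whatever the application already uses at startup): disable the main form, show the login dialog modally, and swap the stored UserID when it succeeds.

        using System;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private void logoutMenuItem_Click(object sender, EventArgs e)
            {
                Enabled = false;                    // keep the main form open but greyed out
                Session.CurrentUserId = null;       // forget the previous user

                using (var login = new LoginForm())
                {
                    if (login.ShowDialog(this) == DialogResult.OK)
                    {
                        Session.CurrentUserId = login.AuthenticatedUserId;
                        ApplyUserRights(login.AuthenticatedUserId);  // re-apply per-user rights
                        Enabled = true;
                    }
                    else
                    {
                        Close();                    // nobody logged in: shut the app down
                    }
                }
            }

            private void ApplyUserRights(int userId)
            {
                // enable/disable menus and buttons according to the user's rights
            }
        }

        public static class Session
        {
            public static int? CurrentUserId { get; set; }
        }

        public class LoginForm : Form
        {
            public int AuthenticatedUserId { get; private set; }
            // validate credentials here, set AuthenticatedUserId and DialogResult = DialogResult.OK
        }

    The same LoginForm can then be reused both at startup and from the "log out" menu item.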

    Read the article

  • xml file reading

    - by Dilse Naaz
    How do I read an XML file in Silverlight using WebClient? I have a task in which an XML file stored on my machine has to be read, and the contents of the XML file displayed in a DataGrid. How can this be done?
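
    A minimal sketch of the usual pattern, assuming the XML file is served from the same site that hosts the Silverlight app (WebClient cannot read arbitrary files on the client machine); the file name, element names, the BookRow class and the myDataGrid control are all made up for illustration:

        using System;
        using System.Linq;
        using System.Net;
        using System.Xml.Linq;

        public class BookRow
        {
            public string Title { get; set; }
            public string Author { get; set; }
        }

        public partial class MainPage : System.Windows.Controls.UserControl
        {
            public MainPage()
            {
                InitializeComponent();

                var client = new WebClient();
                client.DownloadStringCompleted += OnXmlDownloaded;
                client.DownloadStringAsync(new Uri("Books.xml", UriKind.Relative));
            }

            private void OnXmlDownloaded(object sender, DownloadStringCompletedEventArgs e)
            {
                if (e.Error != null) return;        // handle download errors as needed

                XDocument doc = XDocument.Parse(e.Result);
                var rows = (from book in doc.Descendants("Book")
                            select new BookRow
                            {
                                Title  = (string)book.Element("Title"),
                                Author = (string)book.Element("Author")
                            }).ToList();

                // myDataGrid is a DataGrid declared in the XAML; with AutoGenerateColumns
                // left at its default it builds columns from the BookRow properties.
                myDataGrid.ItemsSource = rows;
            }
        }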

    Read the article

  • User sessions with jquery and Ajax

    - by John
    I am using jQuery to set a session. I have a PHP page which gets the values of the person logging in. The value in the session array is then used on another page, where it is stored in a hidden field for database entry. The problem is that the value is not set unless you refresh the page, which defeats the purpose of AJAX and jQuery. Also, the session seems to be one session behind. How can I do this without a page refresh/reload?

    Read the article

  • The future of cloud computing? [closed]

    - by Vimvq1987
    As far as I know, cloud computing is growing rapidly: Amazon EC2, Google App Engine, Microsoft Windows Azure... But I can't imagine how cloud computing will change the world. Will cloud computing play the main role in the software industry? Will our data be stored in one place and then be accessible from anywhere? Will we no longer need powerful PCs because everything will be processed in the "cloud"? Thank you so much.

    Read the article

  • Amazon S3 enforcing access control

    - by KandadaBoggu
    I have several PDF files stored in Amazon S3. Each file is associated with a user, and only the file owner can access it. I have enforced this in my download page, but the actual PDF link points to an Amazon S3 URL, which is accessible to anybody. How do I enforce the access-control rules for this URL (without making my server a proxy for all the PDF download links)?
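
    The usual answer is to keep the bucket private and have the download page hand out short-lived pre-signed URLs after it has checked ownership. A rough sketch with the AWS SDK for .NET (bucket and key names are made up, and exact type names vary slightly between SDK versions):

        using System;
        using Amazon;
        using Amazon.S3;
        using Amazon.S3.Model;

        public static class PdfLinks
        {
            public static string GetTemporaryUrl(string key)
            {
                // Credentials are picked up from configuration/environment.
                var s3 = new AmazonS3Client(RegionEndpoint.USEast1);

                var request = new GetPreSignedUrlRequest
                {
                    BucketName = "my-private-pdfs",              // hypothetical bucket
                    Key        = key,                            // e.g. "user-42/report.pdf"
                    Expires    = DateTime.UtcNow.AddMinutes(10)  // link stops working after 10 minutes
                };

                // S3 itself rejects any request that is not signed, so sharing the
                // expired or unsigned URL gives no access.
                return s3.GetPreSignedURL(request);
            }
        }

    The download page calls GetTemporaryUrl only after verifying that the logged-in user owns the file, and redirects the browser to the returned URL.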

    Read the article

  • MVC2 - Dynamic Field Layout

    - by Rob
    I'm new to MVC and the ADO.NET Entity Framework. Instead of having to create an edit/display view for each entity, I'd like to have a controller base class generate the view and validation code based on metadata stored in a table - something along those lines. I would imagine something like this has already been done, or that there are good reasons for not doing it. Any insight or suggestions are appreciated.
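
    For what it's worth, here is a rough sketch of the kind of thing described (entirely hypothetical names - this is not an existing library): a base controller turns table-driven field metadata into a list of field view models, which a single shared view can then render and validate.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // Hypothetical shape of the metadata rows (mapped from the database table).
        public class FieldMetadata
        {
            public string PropertyName { get; set; }
            public string DisplayLabel { get; set; }
            public string DataType { get; set; }      // "string", "int", "date", ...
            public bool IsRequired { get; set; }
        }

        public class FieldViewModel
        {
            public string Name { get; set; }
            public string Label { get; set; }
            public string DataType { get; set; }
            public bool Required { get; set; }
            public object Value { get; set; }
        }

        public abstract class MetadataDrivenController : Controller
        {
            protected abstract IEnumerable<FieldMetadata> LoadMetadata(string entityName);
            protected abstract object LoadEntity(string entityName, int id);

            public virtual ActionResult Edit(string entityName, int id)
            {
                object entity = LoadEntity(entityName, id);

                // Combine the table-driven metadata with the entity's current values.
                var fields = LoadMetadata(entityName)
                    .Select(m => new FieldViewModel
                    {
                        Name = m.PropertyName,
                        Label = m.DisplayLabel,
                        DataType = m.DataType,
                        Required = m.IsRequired,
                        Value = entity.GetType().GetProperty(m.PropertyName).GetValue(entity, null)
                    })
                    .ToList();

                return View("DynamicEdit", fields);   // one shared view for every entity
            }
        }

    The shared "DynamicEdit" view would loop over the FieldViewModel list and emit the appropriate input and validation markup per DataType/Required flag.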

    Read the article

  • How to add Ajax validation in a master where I have fields for other tables based on jqrelcopy?

    - by AravindRaj
    In a master page (_form) I want to save three fields: a drop-down, a text box, and an option list. If a certain option from the option list is selected, a jqrelcopy block should appear so the user can add many values using jqrelcopy. The values stored via jqrelcopy have to be saved in another table. Now the problem is that I want to add AJAX validation - the field cannot be blank, and only alphabetical characters are allowed - for the fields that are inserted through jqrelcopy. I thought about creating a scenario, but I can't get it right.

    Read the article

  • Scala: is it possible to override default case class constructor?

    - by adam77
    Just wondering if this is possible. What I would actually like to do is check, and possibly modify, one of the arguments before it is stored as a val. Alternatively, I could use an overload and make the default constructor private; in that case I would also like to make the default factory method in the companion object private - how would I do that? Many thanks. Adam

    Read the article

  • json for iphone, download image

    - by alecnash
    I am using json touch to get some data from my server database, and everything works fine, but I've got a little problem. I have some images stored as LONGBLOB in the DB and I can't download them to my iPhone app - all I get is a 'null' array. Does anyone know why this is happening?

    Read the article

  • Send comma delimited CSV through SFTP?

    - by JM4
    I am collecting registration information on my site and need to figure out how to deliver all the data stored in the MySQL DB (or just portions of it) as a comma-delimited CSV file over SFTP, so our partners can access the information. The pages are built using PHP. I literally have no idea how to do this and am hoping somebody has experience doing so. Thanks ahead of time!

    Read the article

  • How do I create a DSN for ODBC in Linux?

    - by deadprogrammer
    I am digging around in a Linux application that supposedly uses DSNs to connect to SQL Server. The connection stopped working and I can't find the credentials that are being used (all I know is the DSN's name). I am familiar with DSNs on Windows, but how are they created, and where are they stored, on Linux?

    Read the article

  • Choosing the MVC view engine

    - by leonard
    I want to allow the end-users of my web application to modify views (via a web-based back office) that are stored in the database. The desired view engine needs to be safe against code injection, meaning the end-user will be limited to the absolute minimum set of expressions and no server-side code inserts are allowed. Is any suitable view engine available to download?

    Read the article

  • Zend_Search_Lucene range query error

    - by Maurice
    I have set up each document with a date field (keyword). Values stored in it are in this format: 20100511. Each time I try to perform a ranged query such as date:[10000000 TO 20000000], I get the following error: "At least one range query boundary term must be non-empty term". Anyone got a clue?

    Read the article

  • EF4 Import/Lookup thousands of records - my performance stinks!

    - by Dennis Ward
    I'm trying to set something up for a movie store website (using ASP.NET, EF4, SQL Server 2008). In my scenario, I want to allow a "Member" store to import their catalog of movies from a text file containing ActorName, MovieTitle, and CatalogNumber, as follows:

        Actor, Movie, CatalogNumber
        John Wayne, True Grit, 4577-12
        (repeated for each record)

    This data will be used to look up an actor and a movie and create a "MemberMovie" record, and my import speed is terrible if I import more than 100 or so records using these tables:

        Actor Table: Fields = {ID, Name, etc.}
        Movie Table: Fields = {ID, Title, ActorID, etc.}
        MemberMovie Table: Fields = {ID, CatalogNumber, MovieID, etc.}

    My method for importing data into the MemberMovie table from the text file is as follows (after the file has been uploaded successfully): create a context; for each line in the file, look up the actor in the Actor table; for each Movie of that actor, look up the matching title; if a matching Movie is found, add a new MemberMovie record to the context and call ctx.SaveChanges().

    The performance of my implementation is terrible. My expectation is that this can be done with thousands of records in a few seconds (after the file has been uploaded), and what I've got times out the browser. My question is this: what is the best approach for performing bulk lookups/inserts like this? Should I call SaveChanges only once rather than for each newly created MemberMovie? Would it be better to implement this using something like a stored procedure?

    A snippet of my loop is roughly this (edited for brevity):

        while ((fline = file.ReadLine()) != null)
        {
            string[] token = fline.Split(separator);
            string actor = token[0];
            string title = token[1];
            string catNumber = token[2];

            Actor found_actor = ctx.Actors.Where(a => a.Name.Equals(actor)).FirstOrDefault();
            if (found_actor == null) continue;

            Movie found_movie = found_actor.Movies.Where(
                s => s.Title.Equals(title, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault();
            if (found_movie == null) continue;

            ctx.MemberMovies.AddObject(new MemberMovie()
            {
                MemberProfileID = profile_id,
                CatalogNumber = catNumber,
                Movie = found_movie
            });

            try
            {
                ctx.SaveChanges();
            }
            catch { }
        }

    Any help is appreciated! Thanks, Dennis
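
    One common way to speed this up (an illustrative sketch along the lines suggested above, not a drop-in solution): read the whole file first, pull the needed actors and movies into in-memory dictionaries with a couple of set-based queries, and call SaveChanges once at the end. It assumes the same entities as the question, that the context also exposes a Movies entity set, that profile_id is in scope as in the original loop, and (to keep it short) that actor names and per-actor titles are unique:

        // Read and split the whole file once.
        var lines = new List<string[]>();
        string fline;
        while ((fline = file.ReadLine()) != null)
            lines.Add(fline.Split(separator));

        // One round trip for every actor named in the file...
        var actorNames = lines.Select(t => t[0].Trim()).Distinct().ToList();
        var actors = ctx.Actors
                        .Where(a => actorNames.Contains(a.Name))
                        .ToDictionary(a => a.Name, StringComparer.CurrentCultureIgnoreCase);

        // ...and one for all of their movies, keyed by "actorId|title".
        var actorIds = actors.Values.Select(a => a.ID).ToList();
        var movies = ctx.Movies
                        .Where(m => actorIds.Contains(m.ActorID))
                        .ToList()
                        .ToDictionary(m => m.ActorID + "|" + m.Title,
                                      StringComparer.CurrentCultureIgnoreCase);

        foreach (var token in lines)
        {
            Actor foundActor;
            if (!actors.TryGetValue(token[0].Trim(), out foundActor)) continue;

            Movie foundMovie;
            if (!movies.TryGetValue(foundActor.ID + "|" + token[1].Trim(), out foundMovie)) continue;

            ctx.MemberMovies.AddObject(new MemberMovie
            {
                MemberProfileID = profile_id,
                CatalogNumber = token[2].Trim(),
                Movie = foundMovie
            });
        }

        ctx.SaveChanges();   // a single commit instead of one per record

    The per-row database round trips (and per-row commits) are what dominate the original loop; moving the lookups in-memory and committing once usually brings an import like this down from minutes to seconds.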

    Read the article
