Search Results

Search found 10691 results on 428 pages for 'batch insert'.


  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore.

    The data I need is small - just a dozen rows - but those dozen rows each have a couple rows from other tables with foreign keys to them, and those couple rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, and the transitive closure of everything that they depend on, and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks.

    Edit: Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?
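
    One answer to the identity question, sketched below in Python with pyodbc rather than as a T-SQL script (the table and column names are made up for illustration): wrap the generated INSERT statements in SET IDENTITY_INSERT ... ON/OFF so the explicit key values from the backup are preserved.

        # Hedged sketch: assumes pyodbc and a hypothetical dbo.Orders table with an
        # identity column OrderId; the INSERT strings would come from sp_generate_inserts
        # run against the restored copy.
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=livehost;DATABASE=LiveDb;Trusted_Connection=yes")
        cur = conn.cursor()

        generated_inserts = [
            "INSERT INTO dbo.Orders (OrderId, CustomerId, Total) VALUES (101, 7, 49.90)",
        ]

        cur.execute("SET IDENTITY_INSERT dbo.Orders ON")   # allow explicit identity values
        for stmt in generated_inserts:
            cur.execute(stmt)
        cur.execute("SET IDENTITY_INSERT dbo.Orders OFF")  # only one table per session can have it ON
        conn.commit()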

    Read the article

  • Appending Comment Number Anchors to Comments

    - by John
    Hello, I am using a PHP file called comments.php that has a query that enters values into a MySQL table called "comment". As the query does this, it auto-generates a field called "commentid", which is set to auto_increment in MySQL. The file also contains a loop that echoes out all comments for a given submission. It all works fine and dandy, but I want to simultaneously pull this "commentid" and turn it into a hashtag / anchor that, when appended to the end of the URL, puts that comment at the top of the user's browser. Someone said on another question that in order to do this I should create an anchor on the row where the comment is being printed out. How can I do this? Thanks in advance, John

    The query that inserts comments into the MySQL table "comment":

        $query = sprintf("INSERT INTO comment VALUES (NULL, %d, %d, '%s', NULL)",
                         $uid, $subid, $comment);
        mysql_query($query) or die(mysql_error());

    The fields in the table "comment": commentid, loginid, submissionid, comment, datecommented

    The row in the loop where the comments are echoed out:

        echo '<td rowspan="3" class="commentname1">'.stripslashes($row["comment"]).'</td>';
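
    The idea, sketched here in Python with MySQLdb rather than the question's PHP (identifiers and sample values are illustrative): grab the auto-generated commentid right after the INSERT and emit it as the row's id attribute, so a URL ending in #comment-<id> scrolls straight to that row.

        # Hedged sketch, not the poster's code: MySQLdb's cursor.lastrowid plays the
        # same role as PHP's mysql_insert_id().
        import MySQLdb

        uid, subid, comment = 1, 2, "Nice article"   # sample values
        conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="site")
        cur = conn.cursor()
        cur.execute("INSERT INTO comment VALUES (NULL, %s, %s, %s, NULL)", (uid, subid, comment))
        conn.commit()

        comment_id = cur.lastrowid
        row_html = '<td rowspan="3" class="commentname1" id="comment-%d">%s</td>' % (comment_id, comment)
        permalink = "#comment-%d" % comment_id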

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a Well Known Location, then on the Oracle side have a cron job that copies down the file and loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one or a combination of FOSS or standard Oracle/SQL Server tools that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL*Loader script - just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV" - and not worry about mapping column names and all other arcane sqlldr voodoo... Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer to not have to write out how to generate and map each single column...
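
    For what it's worth, a rough sketch of the "dump this view out to CSV" half in Python (assuming pyodbc is available on the SQL Server side; the DSN, view name and share path are placeholders) - the column names come for free from the cursor, so nothing has to be mapped by hand:

        # Hedged sketch of the CSV-dump step; the Oracle-side load would still be a
        # cron job invoking sqlldr (or an external table) against the same file.
        import csv
        import pyodbc

        conn = pyodbc.connect("DSN=ReportingSource")             # hypothetical DSN
        cur = conn.cursor()
        cur.execute("SELECT * FROM dbo.vw_DenormalisedReport")   # the "monster query", saved as a view

        with open(r"\\wellknown\share\report.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row = column names
            writer.writerows(cur.fetchall())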

    Read the article

  • Send Special Keys to Gtk.VteTerminal

    - by Ubersoldat
    Hi, I have this OSS project called Monocaffe connections manager which uses the Gtk.VteTerminal widget from PyGTK. A nice feature is that it allows the users to send commands to different servers' consoles (cluster mode) using a Gtk.TextView for the input. The way I send key strokes to each Gtk.VteTerminal is by using the feed_child method. For common keys there's no problem: I simply feed what the TextView receives to all the terminals, but when doing so with special keys I get into a little trouble. For "Return" I catch the event and feed the terminal a '\n'. For backspace it's the same: catch the event and feed a '\b'.

        def cluster_backspace(self, widget):
            return self.cluster_send_key('\b')

    The problem comes with other keys like Tab, Arrows and Esc, which I don't know how to feed as str to the terminal so it recognizes them. In the case of Esc it's a real pain, because the users can edit the same file on different servers using vi, but cannot escape insert mode. Anyway, I'm not looking for a complete solution, just ideas since I've run out of them. Thanks.
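
    One idea, sketched under the assumption that the existing cluster_send_key helper is reused: those keys don't have single-character representations, but they do have the usual VT100/ANSI escape sequences, which feed_child will pass through just like '\n' and '\b'. The keyval names below come from PyGTK's gtk.keysyms; the handler name itself is made up.

        # Hedged sketch: map special keyvals to terminal escape sequences and feed those.
        import gtk

        SPECIAL_KEYS = {
            gtk.keysyms.Tab:    '\t',
            gtk.keysyms.Escape: '\x1b',      # lets vi leave insert mode
            gtk.keysyms.Up:     '\x1b[A',
            gtk.keysyms.Down:   '\x1b[B',
            gtk.keysyms.Right:  '\x1b[C',
            gtk.keysyms.Left:   '\x1b[D',
        }

        def cluster_key_press(self, widget, event):
            seq = SPECIAL_KEYS.get(event.keyval)
            if seq is not None:
                return self.cluster_send_key(seq)   # reuses the existing feed_child path
            return False  # let ordinary characters follow the normal path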

    Read the article

  • Telerik chart not loading correctly in new window (ajax issue?)

    - by Phillip Schmidt
    I have a page which contains user controls with Telerik charts (grids also, but they work fine). From this page, the user can click a button to be redirected to a "Printer-Friendly Version" type page, which opens a new window via javascript and goes through a slightly different view (for formatting and stuff), but the Telerik code is all the same. The problem is, my chart displays just fine in the original window, but the new window displays basically an empty chart with no data. This bug is only present in IE, and only applies to charts. Grids work fine, for whatever reason. I'm thinking this is due to differences in script caching between browsers -- correct me if I'm wrong, I'm semi-new to client-directed web development. Anyway, I read somewhere that Telerik has issues with loading data and/or js files when loaded via ajax, so maybe that's the problem? If so, how could I get around this? And if not, any ideas on what could be causing this issue? It's causing me a great deal of frustration, since a print preview page seems like it should be the easiest of jobs.

    Edit: The charts are being rendered as html (if somebody can explain how to render them as images, that would be awesome). And dev tools shows basically the same thing between Chrome and IE. Whenever my web service goes back up I'll WinMerge them and look for any peculiarities/differences between them. In the meantime, though, the "render as an image" concept sounds promising. That way I could just save the image from the first page and insert it right into the print preview page, right? And since it's a print-preview page, it's not going to need to be interactive or anything, so that'd work out nicely.

    Another (important) Edit: [screenshots of the suspected script errors and a side-by-side of the chart working in Chrome and failing in IE were attached to the original post]

    Read the article

  • Android Camera intent creating two files

    - by Kyle Ramstad
    I am making a program that takes a picture and then shows its thumbnail. When using the emulator all goes well and the discard button deletes the photo. But on a real device the camera intent saves the image at the imageUri variable and a second one that is named as if I had just opened up the camera and taken a picture by itself.

        private static final int CAMERA_PIC_REQUEST = 1337;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.camera);

            // start camera
            values = new ContentValues();
            values.put(MediaStore.Images.Media.TITLE, "New Picture");
            values.put(MediaStore.Images.Media.DESCRIPTION, "From your Camera");
            imageUri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
            image = (ImageView) findViewById(R.id.ImageView01);

            Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
            intent.putExtra(MediaStore.EXTRA_OUTPUT, imageUri);
            startActivityForResult(intent, CAMERA_PIC_REQUEST);

            // save the image buttons
            Button save = (Button) findViewById(R.id.Button01);
            Button close = (Button) findViewById(R.id.Button02);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == CAMERA_PIC_REQUEST && resultCode == RESULT_OK) {
                try {
                    thumbnail = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
                    image.setImageBitmap(thumbnail);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            } else {
                finish();
            }
        }

        public void myClickHandler(View view) {
            switch (view.getId()) {
                case R.id.Button01:
                    finish();
                    break;
                case R.id.Button02:
                    dicard();
            }
        }

        private void dicard() {
            getContentResolver().delete(imageUri, null, null);
            finish();
        }

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, first - this is not meant to be a "which is better" war thread... Rather, I generally need help in making an architecture decision / argument to put forward to my boss. Skipping the details - I simply would love to know the results of anyone who has done performance comparisons of shell scripts vs. [insert general-purpose interpreted language here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here without finding any of this data.

    Has anyone ever done these comparisons, in different use-cases: hitting a database in an N-iteration loop doing different types of SQL (Oracle preferred, but MSSQL would do) queries such as any of the CRUD ops - and also not hitting the database and just doing a regular 50k-loop type comparison with different types of calculations, and things of that nature?

    In particular - for right now, I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any GPPL that's interpreted would be fine, even the higher-level ones like Python). But I also need to know about standard programming calculations / instructions / etc...

    Before you ask "why not just write a quick test yourself?" - the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here from the more experienced guys would be greatly beneficial, not to mention time saving, as we are in near perpetual deadline crunch as it is ;). Thanks so much in advance,
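
    A bare-bones example of the kind of timing harness such a comparison needs, sketched in Python (cx_Oracle and the connect string are assumptions; the same loop can be transcribed into a shell script around sqlplus for the other side of the comparison):

        # Hedged sketch: time 50k trivial round trips to Oracle from an interpreted language.
        import time
        import cx_Oracle   # assumed to be installed and configured

        conn = cx_Oracle.connect("user/password@orcl")   # placeholder credentials/DSN
        cur = conn.cursor()

        start = time.time()
        for _ in range(50000):
            cur.execute("SELECT 1 FROM dual")
            cur.fetchone()
        print("50k round trips took %.2f seconds" % (time.time() - start))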

    Read the article

  • Use of unassigned local variable 'xxx'

    - by Tomislav
    I'm writing a database importer from our competitor's database to ours :) I have a code generator which creates methods for importing into our database, like:

        public void Test_Import_Customer_1()
        {
            // variables
            string conn;
            string sqlSelect;
            string sqlInsert;
            int extID;
            string name;
            string name2;
            DateTime date_inserted;

            sqlSelect = "select id,name,date_inserted from table_competitors_1";
            oledbreader reader = new GetOledbRader(sqlString, conn);
            while (reader.read())
            {
                name = left((string)myreader["name"], 50);   // limitation of my field
                date_inserted = myreader["date_inserted"];
                sqlInsert = string.Format("insert into table(name,name2,date_inserted) values ('{0}', '{1}', {2})",
                                          name, name2, date_inserted);
                // here is the problem: name2 gives "Use of unassigned local variable"
                ExecuteSQL(sqlInsert);
            }
        }

    As different companies' databases have different fields, I can not set a value to each variable, and there is a big number of tables to go through, one variable to the next, like:

        sqlSelect_Company_1 = "select name,date_inserted from table_1";
        sqlSelect_Company_2 = "select name,name2 from table_2";

    Is there a way to override the typing of each variable one by one with default values?

    Read the article

  • Inserting a unique date into a txt document

    - by durian
    I'm trying this script to insert only a unique date into a text file, but it isn't working properly:

        $log_file_name = "logfile.txt";
        $log_file_path = "log_files/$id/$log_file_name";

        if (file_exists($log_file_path)) {
            $not = "not";
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);

            $file_contents = file_get_contents($log_file_path);
            $file_contents_arry = explode(";", $file_contents);

            if (!in_array($todaytodaydate, $file_contents_arry)) {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $today);  // writes our string to our file
                $close = fclose($append);          // closes our file
            } else {
                $append = fopen($log_file_path, 'a');
                $write = fwrite($append, $not);    // writes our string to our file
                $close = fclose($append);          // closes our file
            }
        } else {
            mkdir("log_files/$id", 0700);
            $todaydate = date('d,m,Y');
            $today = "$todaydate;";
            $strlength = strlen($today);

            $create = fopen($log_file_path, "w");
            $write = fwrite($create, $today, $strlength);  // writes our string to our file
            $close = fclose($create);                      // closes our file
        }

    The problem is with the if/else statement that decides what should be written when the date is already in the array.

    Read the article

  • How can I exclude words with apostrophes when reading into a table of strings?

    - by rearden
        ifstream fin;
        string temp;

        fin.open("engldict.txt");
        if (fin.is_open())
        {
            bool apos = false;
            while (!fin.eof())
            {
                getline(fin, temp, '\n');
                if (temp.length() > 2 && temp.length() < 7)
                {
                    for (unsigned int i = 0; i < temp.length(); i++)
                    {
                        if (temp.c_str()[i] == '\'')
                            apos = true;
                    }
                    if (!apos)
                        dictionary.insert(temp);
                }
            }
        }

    This code gives me a runtime error:

        Unhandled exception at 0x00A50606 in Word Jumble.exe: 0xC0000005: Access violation reading location 0x00000014.

    and throws a breakpoint at:

        size_type size() const _NOEXCEPT
        {   // return length of sequence
            return (this->_Mysize);
        }

    within the xstring header. This exception is thrown no matter what character I use, so long as it is present within the words I am reading in. I am aware that it is probably a super simple fix, but I just really need another set of eyes to see it. Thanks in advance.

    Read the article

  • Why am I getting an EXC_BAD_ACCESS in an NSTimer selector?

    - by AngeDeLaMort
    I've got quite a weird problem. To make it short, I'll write some pseudo-code:

        init:
            create a dictionary and insert n elements.
            create a "repeat timer" and add it to the currentRunLoop using the timerRefresh selector.

        timerRefresh:
            using a list of keys, find the items in the dictionary
            if the item exists -> call a function

    So, for an unknown reason, I get an EXC_BAD_ACCESS when I do:

        [item function];

    But I traced the address I got from the dictionary items and it's OK. The ref count of the items in the dictionary is still 1. The {release, dealloc} of the items in the dictionary aren't called. Everything seems fine. Also, to make it worse, it works for some items. So, I'm wondering if there is a threading problem? Or something else obscure? The callstack is quite simple:

        #0 0x93e0604b in objc_msgSend_fpret
        #1 0x00f3e6b0 in ??
        #2 0x0001cfca in -[myObject functionm:] at myObject.m:000
        #3 0x305355cd in __NSFireTimer
        #4 0x302454a0 in CFRunLoopRunSpecific
        #5 0x30244628 in CFRunLoopRunInMode
        #6 0x32044c31 in GSEventRunModal
        #7 0x32044cf6 in GSEventRun
        #8 0x309021ee in UIApplicationMain
        #9 0x000027e0 in main at main.m:14

    So, any suggestion where to look would be appreciated.

    Read the article

  • Pushing elements into array as copy

    - by koko
    In prototypejs, why does the following code remove the matching divs from the #test div? What confuses me is that this happens when they are being inserted in the #droparea, and not when they are being pushed in the array.

        <div id="test">
            <div class="foo" id="22.1234"> 1 </div>
            <div class="foo" id="22.1235"> 2 </div>
            <div class="foo" id="53.2345"> 3 </div>
            <div class="foo" id="53.2346"> 4 </div>
        </div>
        <div id="droparea">
        </div>

    js

        var elArray = [];
        var els = $('test').select('.foo');
        els.each(function(x){
            if (x.id.split('.')[0] == 22) {
                elArray.push(x);
            }
        });
        elArray.each(function(y){
            $('droparea').insert({ bottom: y });
        });

    Read the article

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have this entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik grid view to manage this (add/edit/delete); the problem is where to store that data, since on an insert form the entity has not been created in the database yet. OK, so I can store all that info in temporary lists residing in the viewstate, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files... and storing files in the viewstate is out of the question, same as storing them in the session... I have been thinking of implementing it in a way that the user has to submit some info first (say the first 3 steps), commit the data to the database creating the parent entity, and then start inserting all the child entities... but this will get weird, as it's confusing that on the first steps you are not saving the data to the DB and on the next ones you are committing directly... Anyone have any thoughts on this? Thanks

    Read the article

  • How to add a new value with a generic Repository if there are foreign keys (EF 4)?

    - by Phsika
    I'm trying to write a kind of generic repository Add method. Everything is OK for adding, but I have a table which is related to two other tables with a FOREIGN KEY, and it's not working because of the foreign key.

        public class DomainRepository<TModel> : IDomainRepository<TModel> where TModel : class
        {
            #region IDomainRepository<T> Members
            private ObjectContext _context;
            private IObjectSet<TModel> _objectSet;

            public DomainRepository() { }

            public DomainRepository(ObjectContext context)
            {
                _context = context;
                _objectSet = _context.CreateObjectSet<TModel>();
            }

            //do something.....

            public TModel Add<TModel>(TModel entity) where TModel : IEntityWithKey
            {
                EntityKey key;
                object originalItem;
                key = _context.CreateEntityKey(entity.GetType().Name, entity);
                _context.AddObject(key.EntitySetName, entity);
                _context.SaveChanges();
                return entity;
            }

            //do something.....
        }

    Calling the repository:

        //insert-update-delete
        public partial class AddtoTables
        {
            public table3 Add(int TaskId, int RefAircraftsId)
            {
                using (DomainRepository<table3> repTask = new DomainRepository<table3>(new TaskEntities()))
                {
                    return repTask.Add<table3>(new table3()
                    {
                        TaskId = TaskId,
                        TaskRefAircraftsID = RefAircraftsId
                    });
                }
            }
        }

    How do I add a new value if this table includes a foreign key relation?

    Read the article

  • NHibernate.MappingException on table insertion.

    - by Suja
    The table structure is: [diagram omitted from the original post]

    The controller action to insert a row into the table is:

        public bool CreateInstnParts(string data)
        {
            IDictionary myInstnParts = DeserializeData(data);
            try
            {
                HSInstructionPart objInstnPartBO = new HSInstructionPart();
                using (ISession session = Document.OpenSession())
                {
                    using (ITransaction transaction = session.BeginTransaction())
                    {
                        objInstnPartBO.DocumentId = Convert.ToInt32(myInstnParts["documentId"]);
                        objInstnPartBO.InstructionId = Convert.ToInt32(myInstnParts["instructionId"]);
                        objInstnPartBO.PartListId = Convert.ToInt32(myInstnParts["part"]);
                        objInstnPartBO.PartQuantity = Convert.ToInt32(myInstnParts["quantity"]);
                        objInstnPartBO.IncPick = Convert.ToBoolean(myInstnParts["incpick"]);
                        objInstnPartBO.IsTracked = Convert.ToBoolean(myInstnParts["istracked"]);
                        objInstnPartBO.UpdatedBy = User.Identity.Name;
                        objInstnPartBO.UpdatedAt = DateTime.Now;
                        session.Save(objInstnPartBO);
                        transaction.Commit();
                    }
                    return true;
                }
            }
            catch (Exception ex)
            {
                Console.Write(ex.Message);
                return false;
            }
        }

    This is throwing an exception:

        NHibernate.MappingException was caught
        Message="No persister for: Hexsolve.Data.BusinessObjects.HSInstructionPart"
        Source="NHibernate"
        StackTrace:
            at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
            at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
            at NHibernate.Event.Default.AbstractSaveEventListener.SaveWithGeneratedId(Object entity, String entityName, Object anything, IEventSource source, Boolean requiresImmediateIdAccess)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveEventListener.SaveWithGeneratedOrRequestedId(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.EntityIsTransient(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
            at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
            at NHibernate.Impl.SessionImpl.FireSave(SaveOrUpdateEvent event)
            at NHibernate.Impl.SessionImpl.Save(Object obj)
            at HexsolveMVC.Controllers.InstructionController.CreateInstnParts(String data) in F:\Project\HexsolveMVC\Controllers\InstructionController.cs:line 1342
        InnerException:

    Can anyone help me solve this?

    Read the article

  • In SQL can I return a table with a varying number of columns

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large sproc whose result is a set of characteristics for a set of persons. So the table would look something like this:

        Property | Client1 | Client2 | Client3
        ---------+---------+---------+--------
        Sex      | M       | F       | M
        Age      | 67      | 56      | 67
        Income   | Low     | Mid     | Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

        Property | Client1 | Client2
        ---------+---------+--------
        Sex      | M       | F
        Age      | 67      | 56
        Weight   | 122     | 122

    The different number of properties is easy - those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns. There could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
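
    Not the temp-table declaration the question asks for, but a sketch of a common workaround: keep the result in narrow (client, property, value) rows and pivot on the consuming side, so the number of "columns" can vary freely. The sketch below is Python with made-up sample data, purely to illustrate the shape of the transformation:

        # Hedged sketch: pivot (client, property, value) rows into a property-per-row,
        # client-per-column layout without knowing the client count up front.
        rows = [
            ("Client1", "Sex", "M"), ("Client2", "Sex", "F"), ("Client3", "Sex", "M"),
            ("Client1", "Age", "67"), ("Client2", "Age", "56"), ("Client3", "Age", "67"),
        ]

        clients, pivot = [], {}
        for client, prop, value in rows:
            if client not in clients:
                clients.append(client)
            pivot.setdefault(prop, {})[client] = value

        print("Property | " + " | ".join(clients))
        for prop, values in pivot.items():
            print(prop + " | " + " | ".join(values.get(c, "") for c in clients))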

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push in my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project: one can just build one’s own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio, and debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc, but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.) Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string’s length to be automatically validated. However, I can also imagine similar bugs in a “simple” SqlConnection query based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed? (I should probably add that I think this is like the first “big” .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existing here up to now.)

    Read the article

  • How to insert a message into a View that depends on a session value. ASP.NET MVC. Best practice

    - by Andrew Florko
    Users have to populate multi-step questionnaire web forms, and the step messages depend on the option chosen by the user at the very beginning. Messages are stored in the web.config file. I use an ASP.NET MVC project with strongly typed views and keep business logic separated from the controller in a static class. I don't want to make the business logic depend on web.config. Well, I have to insert a message into the view that depends on a session value. There are at least 2 options for how to implement this:

    1. The view model has a property that is populated in the controller/business logic and rendered in the view like <%: Model.HelpMessage1 %>. I have to pass web.config values from the controller to the business logic, which makes the business logic method signatures too complex. I don't want to make the configuration source abstract (in order to let the business logic read configuration values from its methods directly) either.

    2. Create a static helper class that is called from the view like <%: ViewHelper.HelpMessage1(Model.Option1) %>. But in this case the logic of what to show seems to be separated into two classes: the business logic and the view helper.

    What would you suggest? Thank you in advance!

    Read the article

  • Pushing a vector into a vector

    - by Sunil
    I have a 2D vector:

        typedef vector <double> record_t;
        typedef vector <record_t> data_t;
        data_t data;

    So my 2D vector is data here. It has elements like, say:

        1 1 1 1 1
        2 2 2 2 2
        3 3 3 3 3
        4 4 4 4 4
        5 5 5 5 5

    Now I want to insert these elements into another 2D vector:

        std::vector< vector<double> > window;

    So what I did was to create an iterator for traversing through the rows of data and pushing them into window, like:

        std::vector< std::vector<double> >::iterator data_it;
        for (data_it = data.begin(); data_it != data.end(); ++data_it) {
            window.push_back(*data_it);
            // Do something else
        }

    Can anybody tell me where I'm wrong or suggest a way to do this? BTW, I want to push it element by element because I want to be able to do something else inside the loop too, i.e. I want to check for a condition and increment the iterator inside. For example, if a condition is satisfied then I'll do data_it += 3 or something like that inside the loop. Thanks.

    P.S. I asked this question last night and didn't get any response, and that's why I'm posting it again.

    Read the article

  • N Tiers with SubSonic 3, Dirty Columns collection is always empty on update

    - by Adel
    Hello guys, here is what I am doing, and it's not working for me. I have a DAL generated with the SubSonic 3 ActiveRecord template, and I have a service layer (business layer if you will) that is a mixture of a facade and some validation. Say I have a method on the service layer like:

        public void UpdateClient(Client client);

    In my GUI I create a Client object, fill it with some data including the ID, and pass it to the service method - and this never works: the dirty columns collection (which tracks which columns are altered, in order to use a more efficient update statement) is always empty. If I try to get the object from the database inside my GUI and then pass it to the service method, it doesn't work either. The only scenario I find working is if I query the object from the database and call Update() on the same context, all inside my GUI - and this defeats the whole service layer I've created. However, for insert and delete everything works fine. I wonder if this has something to do with object tracking, but as far as I know SubSonic doesn't do that. Please advise. Thanks. Adel.

    Read the article

  • How to create C++ istringstream from a char array with null(0) characters?

    - by Morpheus
    I have a char array which contains null characters at random locations. I tried to create an istringstream using this array (encodedData_arr) as below. I use this istringstream to insert binary data (the image data of an IplImage) into a MySQL database blob field (using MySQL Connector/C++'s setBlob(istream *is)), but it only stores the characters up to the first null character. Is there a way to create an istringstream from a char array containing null characters?

        unsigned char *encodedData_arr = new unsigned char[data_vector_uchar->size()];

        // Assign the data of vector<unsigned char> to the encodedData_arr
        for (int i = 0; i < vec_size; ++i)
        {
            cout << data_vector_uchar->at(i) << " : " << encodedData_arr[i] << endl;
        }
        // Here the content of encodedData_arr is the same as data_vector_uchar,
        // so the char array is initializing fine.

        istream *is = new istringstream((char*)encodedData_arr,
                                        istringstream::in | istringstream::binary);
        prepStmt_insertImage->setBlob(1, is);
        // Here only part of the data is stored in the database blob field
        // (up to the first null character)

    Read the article

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance decreases drastically, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down at such a number of users?

    Edit: I forgot to mention that I did some kind of profiling. I ran top and htop and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst point is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 ways:

    1. My tables are MyISAM. I heard they use table-level locking, which is very slow - is that right? Could I change them to InnoDB without worry?

    2. What if I take MySQL and move it to a dedicated machine with a lot of RAM?

    Read the article

  • Saving temporary file name after uploading file with Uploadify

    - by mIRU
    I have this script:

        if (!empty($_FILES)) {
            $tempFile = $_FILES['Filedata']['tmp_name'];
            $targetPath = dirname(__FILE__).$_POST['folder'] . '/';
            $pathinfoFile = pathinfo($_FILES['Filedata']['name']);
            $targetFile = str_replace('//','/',$targetPath) .uniqid().'.'.$pathinfoFile['extension'];
            move_uploaded_file($tempFile,$targetFile);
        }

    This script is from Uploadify, modified to save the file with a unique name. After the user uploads the file, I need to save this unique name and the original name temporarily; when the user submits the form, I will insert these temporary variables into the database. The problem is that in the Firebug console I can not see all the actions of this script, and I can not understand how to fix it. I tried to save the names in $_SESSION, but they are not being saved. I found out why it is not working with $_SESSION (http://uploadify.com/forum/viewtopic.php?f=5&t=43) and tried the solution from the forum, but without result. Is there an easier way to solve this problem, or a better way to do it? Sorry for such a silly question - I ran out of ideas. Thanks a lot for helping me, and sorry again for my English.

    Read the article

  • Error with MySQL Query

    - by Ken
    Okay, I must be an idiot, because this is my 3rd question for today. Here's my code:

        date_default_timezone_set("America/Los_Angeles");
        include("mainmenu.php");
        $con = mysql_connect("localhost", "root", "********");
        if(!$con){
            die(mysql_error());
        }
        $usrname = $_POST['usrname'];
        $fname = $_POST['fname'];
        $lname = $_POST['lname'];
        $password = $_POST['password'];
        $email = $_POST['email'];
        mysql_select_db("`users`, $con) or die(mysql_error()");
        $query = ("INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`)
                   VALUES (NULL, '$usrname', '$fname', '$lname', '$email', 'password'))");
        mysql_query('$query') or die(mysql_error());
        mysql_close($con);
        echo("Thank you for registering!");

    I always get the error returned as: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$query' at line 1." Help a newbie. I'm about to stab my monitor.

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload & watch videos. I would like to add a "most watched videos this week" list to my page. But this list should not contain just the videos that were uploaded in the previous week - it should cover all videos. I'm currently recording views in a column, so I have no information on when a video was watched. So now I'm searching for a solution for how to record this data.

    The first is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new line every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using this table would be extremely slow (we get about 3 million views a month).

    The second solution isn't as flexible but is easier on the database. I would add 7 columns to the "videos" table (one for each day of the week): views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on what day it is, resetting the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns.

    What do you think - should I bother with the first solution, or will the second one suffice for my case? If you have a better solution please share! Oh, I'm using MySQL.
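
    For reference, a sketch of what the first option could look like, written here in Python with MySQLdb (the video_view table, its columns, and the connection details are all made up for illustration - the question itself does not specify a schema):

        # Hedged sketch: log one row per view, then aggregate the last 7 days.
        import MySQLdb

        video_id = 42   # sample id
        conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="videos")
        cur = conn.cursor()

        # record a view
        cur.execute("INSERT INTO video_view (video_id, viewed_at) VALUES (%s, NOW())", (video_id,))
        conn.commit()

        # most watched videos this week
        cur.execute("""
            SELECT video_id, COUNT(*) AS views
            FROM video_view
            WHERE viewed_at >= NOW() - INTERVAL 7 DAY
            GROUP BY video_id
            ORDER BY views DESC
            LIMIT 10
        """)
        top_videos = cur.fetchall()

    An index on (viewed_at, video_id) and periodic archiving of old rows are the usual ways to keep such a log table from becoming the bottleneck the question worries about.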

    Read the article
