Search Results

Search found 10042 results on 402 pages for 'orient db'.

Page 343 of 402

  • Search book by title and author

    - by Swoosh
    I have a table with columns: author first name, author last name, and book title. Multiple users insert into the database through an import, and I'd like to avoid duplicates.

    Say I already have this record in the DB:

        First Name: "Isaac"
        Last Name:  "Asimov"
        Title:      "I, Robot"

    If a user tries to add it again, it arrives as unsplit text (it would not be broken up into author first name, author last name, and book title), so it could look like any of these:

        "Isaac Asimov - I Robot"
        "Asimov, Isaac - I Robot"
        "I Robot by Isaac Asimov"

    You see where I am getting at? (I cannot force the user to split every book into author first name, author last name, and book title, and I don't even like the idea of forcing the user, because it's not very user-friendly.)

    What is the best way (in SQL) to compare all these possible book-data variations to what I have in the database, so the same book is not added twice? I was thinking about suggesting to the user: "is THIS the book you are trying to add?" (imagine a list instead of the word THIS, just like Stack Overflow's "Related Questions" list when you ask a question). I was thinking about SOUNDEX and maybe even the LIKE operator, but so far I haven't gotten the results I was hoping for.
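
    To make this concrete, here is roughly the shape of lookup I have been experimenting with on the insert side (C# only to show where it plugs in; the Books table, column names and SQL Server syntax are assumptions for this sketch, not my real schema):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Sketch only: the returned rows would be offered back to the user as
        // "is THIS the book you are trying to add?" before inserting.
        static List<string> FindPossibleDuplicates(SqlConnection connection, string rawEntry)
        {
            // rawEntry is the unsplit text, e.g. "Isaac Asimov - I Robot"
            const string sql = @"
                SELECT TOP 10 FirstName, LastName, BookTitle
                FROM   Books
                WHERE  @raw LIKE '%' + BookTitle + '%'
                   OR (@raw LIKE '%' + LastName + '%' AND @raw LIKE '%' + FirstName + '%')";

            var candidates = new List<string>();
            using (var cmd = new SqlCommand(sql, connection))
            {
                cmd.Parameters.AddWithValue("@raw", rawEntry);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        candidates.Add(reader.GetString(0) + " " + reader.GetString(1)
                                       + " - " + reader.GetString(2));
                }
            }
            return candidates;
        }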

  • Rails Heroku Migrate Unknown Error

    - by Ryan Max
    Hello. I am trying to get my app up and running on Heroku, but once I go to migrate I get the following error:

        $ heroku rake db:migrate
        rake aborted!
        An error has occurred, this and all later migrations canceled:

        530 5.7.0 Must issue a STARTTLS command first. bv42sm676794ibb.5

        (See full trace by running task with --trace)
        (in /disk1/home/slugs/155328_f2d3c00_845e/mnt)
        == BortMigration: migrating =================================================
        -- create_table(:sessions) -> 0.1366s
        -- add_index(:sessions, :session_id) -> 0.0759s
        -- add_index(:sessions, :updated_at) -> 0.0393s
        -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0611s
        -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0298s
        -- create_table(:users) -> 0.0222s
        -- add_index(:users, :login, {:unique=>true}) -> 0.0068s
        -- create_table(:passwords) -> 0.0123s
        -- create_table(:roles) -> 0.0119s
        -- create_table(:roles_users, {:id=>false}) -> 0.0029s

    I'm not sure exactly what it means - or really what it means at all. Could it have to do with my Bort installation? I did remove all the OpenID stuff from it, but I never had any problems with my migrations locally. Additionally, on Bort the Restful Authentication plugin uses my Gmail SMTP account to send confirmation emails, and all the searches I do on Google for STARTTLS have to do with SMTP. Can someone point me in the right direction?

  • Saving complex aggregates using Repository Pattern

    - by Kevin Lawrence
    We have a complex aggregate (sensitive names obfuscated for confidentiality reasons). The root, R, is composed of collections of Ms, As, Cs, and Ss. Ms have collections of other low-level details, etc. R really is an aggregate (no fair suggesting we split it!). We use lazy loading to retrieve the details - no problem there. But we are struggling a little with how to save such a complex aggregate.

    From the caller's point of view:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r);

    Our competing solutions are:

    1. Dirty flags. Each entity in the aggregate has a dirty flag. The save() method in the repository walks the tree looking for dirty objects and saves them. Deletes and adds are a little trickier - especially with lazy loading - but doable.

    2. Event listener accumulates changes. The repository subscribes a listener to changes and accumulates events. When save is called, the repository grabs all the change events and writes them to the DB.

    3. Give up on the repository pattern. Implement overloaded save methods that save the parts of the aggregate separately. The original example would become:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r.Ps);
        repository.save(r.Cs);
        repository.save(r.Ms);

    (or worse)

    Advice please! What should we do?

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit' that is a passthrough to another database on the same machine, MyOtherDB, which has a stored procedure called 'CreateAudit' as well. In other words, in MyDb I have CreateAudit, which simply does: EXEC MyOtherDB.dbo.CreateAudit.

    I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error!

    Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses; I'm very interested in hearing how this can happen.

  • Auditing in Entity Framework.

    - by Gabriel Susai
    After going through Entity Framework I have a couple of questions on implementing auditing in Entity Framework. I want to store each column value that is created or updated in a separate audit table. Right now I am calling SaveChanges(false) to save the records in the DB (the changes in the context are still not reset). Then I get the added/modified records and loop through the GetObjectStateEntries. But I don't know how to get the values of the columns whose values are filled by a stored proc, i.e. createdate, modifieddate etc. Below is the sample code I am working on:

        // Get the changed entries (i.e. records)
        IEnumerable<ObjectStateEntry> changes =
            context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified);

        // Iterate each ObjectStateEntry (one for each record in the modified collection)
        foreach (ObjectStateEntry entry in changes)
        {
            // Iterate the columns in each record and get their old and new values respectively
            foreach (var columnName in entry.GetModifiedProperties())
            {
                string oldValue = entry.OriginalValues[columnName].ToString();
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, oldvalue, newvalue
            }
        }

        changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added);
        foreach (ObjectStateEntry entry in changes)
        {
            if (entry.IsRelationship) continue;

            var columnNames = (from p in entry.EntitySet.ElementType.Members
                               select p.Name).ToList();
            foreach (var columnName in columnNames)
            {
                string newValue = entry.CurrentValues[columnName].ToString();
                // Do some auditing by sending entityname, columnname, value
            }
        }

  • Rails - handling global site settings

    - by egarcia
    I'm developing a new Rails application which is supposed to be installed several times in order to implement several sites. There are some things, like the "Site Title" or the "Default Number of Items per Page", that clearly belong in a "global settings" table / config file.

    I've made a list of the things I think I'll need:

    An ActiveRecord model that is capable of:
    - Storing different kinds of data. I suppose this would be accomplished by encoding the values in a string column in the db, probably with a "type" field.
    - Indexing settings by name.
    - Validations based on the "type" attribute (i.e. don't accept invalid dates on "date" settings).
    - Validations based on an allows_nil property.

    A controller that allows me to change settings via views.

    I'm pretty sure I could implement this myself, but I'm not willing to reinvent the wheel. I've done some searching, but I could only find rails-settings, which doesn't really serve me: I need a proper model and controller so I can use declarative-authorization, and it does not provide any controller or view facilities. Is there a gem or plugin out there that implements what I want, or any library I should look at? Thanks a lot.

  • Entity Framework - refresh objects from database

    - by Nebo
    I'm having trouble refreshing objects in my database. I have two PCs and two applications. On the first PC there's an application which communicates with my database and adds data to the Measurements table. On the other PC there's an application which retrieves the latest Measurement on a timer, so it should also pick up measurements added by the application on the first PC.

    The problem is it doesn't. On application start it caches all the data from the database and never gets the newly added data. I use the Refresh() method, which works well when I change any of the cached data, but it doesn't refresh newly added records. Here is the method which should update the data:

        public static Entities myEntities = new Entities();

        public static Measurement GetLastMeasurement(int conditionId)
        {
            myEntities.Refresh(RefreshMode.StoreWins, myEntities.Measurements);
            return (from measurement in myEntities.Measurements
                    where measurement.ConditionId == conditionId
                    select measurement).OrderByDescending(cd => cd.Timestamp).First();
        }

    P.S. The applications have different connection strings in app.config (different accounts for the same DB).

  • GET and XMLHttpRequest

    - by Neethusha
    I have an XMLHttpRequest. The request passes a parameter to my PHP server code in /var/www, but I cannot seem to be able to extract the parameter back on the server side. Below I have pasted both pieces of code.

    JavaScript:

        function getUsers(u)
        {
            alert(u); // here u is 'http://start.ubuntu.com/9.10'
            xmlhttp = new XMLHttpRequest();
            var url = "http://localhost/servercode.php" + "?q=" + u;
            xmlhttp.onreadystatechange = useHttpResponse;
            xmlhttp.open("GET", url, true);
            xmlhttp.send(null);
        }

        function useHttpResponse()
        {
            if (xmlhttp.readyState == 4)
            {
                var response = eval('(' + xmlhttp.responseText + ')');
                for (i = 0; i < response.Users.length; i++)
                    alert(response.Users[i].UserId);
            }
        }

    servercode.php:

        <?php
        $q = $_GET["q"];
        //$q = "http://start.ubuntu.com/9.10";
        $con = mysql_connect("localhost", "root", "blaze");
        if (!$con)
        {
            die('could not connect to database' . mysql.error());
        }
        mysql_select_db("BLAZE", $con) or die("No such Db");
        $result = mysql_query("SELECT * FROM USERURL WHERE URL='$q'");
        if ($result == null)
            echo 'nobody online';
        else
        {
            header('Content-type: text/html');
            echo "{\"Users\":[";
            while ($row = mysql_fetch_array($result))
            {
                echo '{"UserId":"' . $row[UsrID] . '"},';
            }
            echo "]}";
        }
        mysql_close($con);
        ?>

    This is not giving the required result, although the commented statement - where the variable is assigned the value of the argument explicitly - works and alerts me the required output. Somehow the GET parameter is not reaching my PHP, or that's how I think it is. Please help.

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects would basically be in one place - my collection.

    A cool thing I could do with this is avoid database calls to get data I already have (even if I updated it after retrieval, it's still up to date - assuming, of course, no other process updated it, but that's a different concern). I don't want to be calling new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the AppDomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my AppDomain (per process would be even better, if possible).

    I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference to the existing Add() methods (where I'm creating them) using ref, so that's not too good, I think. So how can this be implemented - can it be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?

    So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it's out of date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
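
    Just to illustrate the kind of thing I mean, the simplest shape I can think of is an identity-map style lookup keyed by id (the Customer type and the loader delegate here are hypothetical, purely for the example - this doesn't answer the ref/out part, it's only what I'm trying to end up with):

        using System;
        using System.Collections.Generic;

        // Sketch of the "one copy per customer in the process" idea.
        public static class CustomerMap
        {
            private static readonly Dictionary<int, Customer> _byId =
                new Dictionary<int, Customer>();

            public static Customer GetOrLoad(int id, Func<int, Customer> loadFromDb)
            {
                Customer c;
                if (!_byId.TryGetValue(id, out c))
                {
                    c = loadFromDb(id);   // only hit the database on a cache miss
                    _byId[id] = c;
                }
                return c;
            }

            public static void Invalidate(int id)
            {
                _byId.Remove(id);         // force a re-fetch next time
            }
        }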

  • How would I structure the loop to go through inputs?

    - by dmanexe
    I am attempting to make a loop that will go through an array structured like this:

        $input[n][checked]
        $input[n][input]

    The second input acts as a price multiplier, but doesn't have to exist (the field can be blank). I don't think a foreach loop is right, because I don't think it handles the inputs from the form in the correct dimensional array order to keep the information together. I have inputs on a form that look like this:

        <input type="checkbox" name="measure[<?php echo $item->id; ?>][checked]" value="<?php echo $item->id; ?>">
        <input class="item_mult" type="text" name="measure[<?php echo $item->id; ?>][input]" />

    I am attempting to make the loop go through and act as a multiplier on the input relative to its sibling field (i.e. input[1][input] would be an integer that I want to retrieve later, grouped with input[1][checked]).

        <? $field = $this->input->post('measure', true);
        $totals = array();
        foreach ($field as $value):
            if ($value['input'] == TRUE) {
                $query = $this->db->get_where('items', array('id' => $value['input']))->row();
                $totals[] = $query->price;
        ?>
        <p><?=$query->name?> - <?=money_format('%(#10n', $query->price)?></p>
        <?php
            } else {
            }
        endforeach;
        ?>

    And finally, the last bit of code to array_sum and print the grand total:

        <? $grand_total = array_sum($totals); ?>
        <p><?=money_format('%(#10n', $grand_total)?></p>

    Eventually I will need to store these records in a database, so I am sending complete item IDs through, etc.

  • Long text input from user and PDF generation

    - by Petteri Hietavirta
    I have built a web application that can be seen as an overcomplicated application form. There are a bunch of text areas, each with a given character limit. After the form submission various things happen, one of them being PDF generation: the text is queried from the DB and inserted into the PDF template created in iReports. This works fine, but the major pain is overflowing text. The maximum number of characters is set based on 'average' text, but sometimes people prefer to write in CAPS or add plenty of line feeds to format their text. These then cause the user's text to overflow the space given in the PDF. Unfortunately the PDF document must look like a real application form, so I cannot allow unlimited space.

    What kind of approaches have you used to tackle this? Clean/restrict the user input? Calculate the space requirement of the text based on font metrics? Provide a preview of the PDF? (Too bad users are not allowed to change their input after submission...)

  • MySQL to Excel generation using PHP

    - by pmms
        <?php
        // DB Connection here
        mysql_connect("localhost", "root", "");
        mysql_select_db("hitnrunf_db");

        $select = "SELECT * FROM jos_users ";
        $export = mysql_query($select) or die("Sql error : " . mysql_error());

        $fields = mysql_num_fields($export);
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }

        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        header("Content-type: application/octet-stream");
        header("Content-Disposition: attachment; filename=your_desired_name.xls");
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$header\n$data";
        ?>

    The code above is used for generating an Excel spreadsheet from a MySQL database, but we are getting the following error when opening the file:

        The file you are trying to open, 'users.xls', is in a different format than specified
        by the file extension. Verify that the file is not corrupted and is from a trusted
        source before opening the file. Do you want to open the file now?

    What is the problem and how do we fix it?

  • T-SQL: most efficient row to column? FOR XML PATH, PIVOT

    - by ajberry
        create table _orders
        (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details
        (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union
        select 2 union
        select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union
        select 1,158 union
        select 1,234 union
        select 2,125 union
        select 3,105 union
        select 3,101 union
        select 3,212 union
        select 3,250

        --
        select
            orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ),'&#x20;','') as Products
        from _orders o

    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema above, but the concept is similar) in both fixed-width and delimited formats. The FOR XML PATH query above gives me the result I want, but when dealing with anything other than small amounts of data it can take a while. I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. For example, for an order it would need to output:

        OrderId, Product1, Product2, Product3, etc.

    Thoughts or suggestions? I am using SQL Server 2005.

  • How to put large text data (~20 MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data:

        internal static string InsertStorageItem =
            "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)";

    and the code I am using to execute this query is as follows:

        string content = "very very large data";
        string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM");
        var command = new SqlCeCommand(query, _sqlConnection);
        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Value = content;
        paramData.SourceColumn = "StorageData";
        command.ExecuteNonQuery();

    But at the last line I am getting the following error:

        System.Data.SqlServerCe.SqlCeException was unhandled by user code
        Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        Source=SQL Server Compact ADO.NET Data Provider
        HResult=-2147467259
        NativeError=25920
        StackTrace:
            at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr)
            at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor)
            at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options)
            at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery()
            at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder)
            at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder)
            at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder)
            at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders()
            at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e)
            at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
            at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
        InnerException:

    Now my question is: how am I supposed to insert such large data into a SQL CE db?

    Regards,
    Anindya Chatterjee
    http://abstractclass.org

  • dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }

    I have a new requirement: if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }

    My question is: is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the Kool-Aid.

    On a related note, once I get this feature completed I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that makes that transition easier would be much appreciated. Thanks.

  • Preventing a button from responding to 'Enter' in ASP.NET

    - by kd7iwp
    I have a master template that all the pages on my site use. In the template there is an empty panel. In the code-behind for the page, an ImageButton is created in the panel dynamically in my Page_Load section (it makes a call to the DB to determine which button should appear, via my controller). On some pages that use this template and have forms on them, pressing the Enter key fires the click event on this ImageButton rather than the submit button in the form.

    Is there a simple way to prevent the ImageButton click event from firing unless it's clicked by the mouse? I'm thinking a JavaScript method is a hack, especially since the button doesn't even exist in the master template until it is dynamically created on Page_Load (this is ugly since I can't simply do <% =btnName.ClientId %> to refer to the button's name in my aspx page). I tried setting a super-high tabindex for the image button and that did nothing. I also set the button to be the DefaultButton in its panel on the master template, but that did not work either.

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20-30 seconds to Commit 30 KB of data!

    This delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays. The database has 16 tables, averaging 10 columns each.

    One possible problem is that I'm storing a dozen or so IList<float> members, but they are typically only 200 elements long. However, this is a recent addition to Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem.

    Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite?

    Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging.

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }

                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }

  • How can I protect this code from SQL Injection? A bit confused.

    - by Craig Whitley
    I've read various sources but I'm unsure how to implement their advice in my code. I was wondering if somebody could give me a quick hand with it? Once I've been shown how to do it once in my code, I'll be able to pick it up, I think.

    This is from an AJAX autocomplete I found on the net, although I saw something about it being vulnerable to SQL injection due to the '%$queryString%' part or something. Any help really appreciated!

        if (isset($_POST['queryString'])) {
            $queryString = $_POST['queryString'];

            if (strlen($queryString) > 0) {
                $query = "SELECT game_title, game_id FROM games WHERE game_title LIKE '%$queryString%' || alt LIKE '%$queryString%' LIMIT 10";

                $result = mysql_query($query, $db)
                    or die("There is an error in database please contact [email protected]");

                while ($row = mysql_fetch_array($result)) {
                    $game_id = $row['game_id'];
                    echo '<li onClick="fill(\'' . $row['game_title'] . '\',' . $game_id . ');">' . $row['game_title'] . '</li>';
                }
            }
        }

  • Compile Assembly Output generated by VC++?

    - by SDD
    I have a simple hello world C program and compile it with /FA. As a consequence, the compiler also generates the corresponding assembly listing. Now I want to use masm/link to assemble an executable from the generated .asm listing.

    The following command line yields 3 linker errors:

        \masm32\bin\ml /I"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include" /c /coff asm_test.asm
        \masm32\bin\link /SUBSYSTEM:CONSOLE /LIBPATH:"C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib" asm_test.obj

    indicating that the C-runtime functions were not linked to the object files produced earlier:

        asm_test.obj : error LNK2001: unresolved external symbol @__security_check_cookie@4
        asm_test.obj : error LNK2001: unresolved external symbol _printf
        LINK : error LNK2001: unresolved external symbol _wmainCRTStartup
        asm_test.exe : fatal error LNK1120: 3 unresolved externals

    Here is the generated assembly listing:

        ; Listing generated by Microsoft (R) Optimizing Compiler Version 15.00.30729.01
        TITLE   c:\asm_test\asm_test\asm_test.cpp
            .686P
            .XMM
            include listing.inc
            .model  flat
        INCLUDELIB OLDNAMES
        PUBLIC  ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@       ; `string'
        EXTRN   @__security_check_cookie@4:PROC
        EXTRN   _printf:PROC
        ;       COMDAT ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
        CONST   SEGMENT
        ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@ DB 'hello world!', 0aH, 00H ; `string'
        CONST   ENDS
        PUBLIC  _wmain
        ; Function compile flags: /Ogtpy
        ;       COMDAT _wmain
        _TEXT   SEGMENT
        _argc$ = 8                      ; size = 4
        _argv$ = 12                     ; size = 4
        _wmain  PROC                    ; COMDAT
        ; File c:\users\octon\desktop\asm_test\asm_test\asm_test.cpp
        ; Line 21
            push    OFFSET ??_C@_0O@OBPALAEI@hello?5world?$CB?6?$AA@
            call    _printf
            add     esp, 4
        ; Line 22
            xor     eax, eax
        ; Line 23
            ret     0
        _wmain  ENDP
        _TEXT   ENDS
        END

    I am using the latest masm32 version (6.14.8444).

  • Castle Active Record - Working with the cache

    - by David
    Hi all, I'm new to the Castle ActiveRecord pattern and I'm trying to get my head around how to effectively use the cache.

    What I'm trying to do (or want to do) is: when calling GetAll, find out if I have called it before and check the cache, otherwise load it from the db. I also want to pass a bool parameter that will force the cache to clear and requery the db. So I'm just looking for the final bits. Thanks.

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            List<Model.Resource> resources = new List<Model.Resource>();

            // Request to force reload
            if (forceReload)
            {
                // need to specify to force a reload (how?)
                XmlConfigurationSource source = new XmlConfigurationSource("appconfig.xml");
                ActiveRecordStarter.Initialize(source, typeof(Model.Resource));

                resources = Model.Resource.FindAll().ToList();
            }
            else
            {
                // Check the cache somehow and return the cache?
            }

            return resources;
        }

        public static List<Model.Resource> GetAll()
        {
            return GetAll(false);
        }
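
    In case it helps show what I mean, this is roughly the shape I'm picturing if I end up just memoizing the result myself rather than using anything built into Castle (the static field and lock are my own invention for this sketch):

        private static readonly object _cacheLock = new object();
        private static List<Model.Resource> _cachedResources;

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            lock (_cacheLock)
            {
                // reload on demand, or on first use; otherwise hand back the cached list
                if (forceReload || _cachedResources == null)
                    _cachedResources = Model.Resource.FindAll().ToList();

                return _cachedResources;
            }
        }

    But ideally I'd lean on whatever caching support ActiveRecord itself provides, if that's the more idiomatic route.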

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form:

        {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...}

    Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type on each day. Also, I'd like to make it unique by user_id, i.e. if a user has more events in the same day, count it only once.

    I'm trying to do this with map/reduce. I do:

        db.logs.mapReduce(
            function() { emit(this.type, 1); },
            function(k, vals) {
                var total = 0;
                for (var i = 0; i < vals.length; i++) total += vals[i];
                return total;
            })

    which nicely groups by type, but now how can I group by date at the same time? It seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness?

    I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!

  • Problem saving and retrieving an image in a SQL database from C#

    - by Mobin
    I used this code for inserting records into a Person table in my DB:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);

        this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(),
            Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date,
            Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(),
            ms.GetBuffer());

    and when I retrieve it with this code it works perfectly fine:

        byte[] image = (byte[])Person_On_Application.Rows[0][8];
        MemoryStream Stream = new MemoryStream();
        Stream.Write(image, 0, image.Length);
        Bitmap Display_Picture = new Bitmap(Stream);
        Image_Box.Image = Display_Picture;

    But if I update the record with my own generated query, like this:

        System.IO.MemoryStream ms = new System.IO.MemoryStream();
        Image img = Image_Box.Image;
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp);

        Query = "UPDATE Person SET Person_Image ='" + ms.GetBuffer() + "' WHERE (Person_NIC = '" + NIC_Box.Text.Trim() + "')";

    then the next time I use the same retrieval and display code as above, the program throws an exception.
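
    For comparison, this is what I understand a parameterized version of that update would look like (conn here is a hypothetical open SqlConnection; I have not wired this into my form yet). As far as I can tell, concatenating ms.GetBuffer() into the SQL string just stores the text "System.Byte[]" rather than the actual bytes, which would explain the exception on the next read:

        using (var cmd = new System.Data.SqlClient.SqlCommand(
            "UPDATE Person SET Person_Image = @img WHERE Person_NIC = @nic", conn))
        {
            // pass the picture as a binary parameter instead of concatenating it into the SQL text
            cmd.Parameters.Add("@img", System.Data.SqlDbType.Image).Value = ms.ToArray();
            cmd.Parameters.Add("@nic", System.Data.SqlDbType.NVarChar, 50).Value = NIC_Box.Text.Trim();
            cmd.ExecuteNonQuery();
        }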

  • Business entity: private instance VS single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order. The entity is used in multiple views, each view handling a different domain or use case in the application - for example, one managing orders and another digging into one order and displaying additional data.

    If I used NHibernate (or any other ORM) with one session/dataContext per view (or per db action), I'd end up with two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same.

    Why would you go for a single instance per entity versus private instances per view or per use case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level. In the example above, I'd allow the user to change order details and give them the flexibility not to save the change. Here, synchronisation between the views/use cases happens at the data persistence level.

    What would your argument be?

  • MySQL not running on Ubuntu - Error 2002

    - by mgj
    Hi, I am a novice with MySQL. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the package mysql-client-5.1.

    I wonder how the database password was set for the mysql-client software that I installed through the above way. It would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        mohnish@mohnish-laptop:/var/lib$

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        ********* simple selection *********  ********* selection by list *********
        -A all processes                      -C by command name
        -N negate selection                   -G by real group ID (supports names)
        -a all w/ tty except session leaders  -U by real user ID (supports names)
        -d all except session leaders         -g by session OR by effective group name
        -e all processes                      -p by process ID
        T all processes on this terminal      -s processes in the sessions given
        a all w/ tty, including other users   -t by tty
        g OBSOLETE -- DO NOT USE              -u by effective user ID (supports names)
        r only running processes              U processes for specified users
        x processes w/o controlling ttys      t by tty
        *********** output format **********  *********** long options ***********
        -o,o user-defined  -f full            --Group --User --pid --cols --ppid
        -j,j job control   s signal           --group --user --sid --rows --info
        -O,O preloaded -o  v virtual memory   --cumulative --format --deselect
        -l,l long          u user-oriented    --sort --tty --forest --version
        -F extra full      X registers        --heading --no-heading --context
        ********* misc options *********
        -V,V show version     L list format codes  f ASCII art forest
        -m,m,-L,-T,H threads  S children in sum    -y change -l format
        -M,Z security data    c true command name  -c scheduling class
        -w,w wide output      n numeric WCHAN,UID  -H process hierarchy
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.

  • Spring 3 Uploadify giving 404 error

    - by Rajkumar
    I am using Spring 3 and implementing Uploadify. The problem is that the files upload properly, but I get an HTTP Error 404 on completion of the upload. I have tried every possible solution, but none of them works. The files are uploaded and the values are stored in the DB properly - only I am getting the HTTP Error 404. Any help is appreciated, and thanks in advance.

    The JSP page:

        $(function() {
            $('#file_upload').uploadify({
                'swf'            : 'scripts/uploadify.swf',
                'fileObjName'    : 'the_file',
                'fileTypeExts'   : '*.gif; *.jpg; *.jpeg; *.png',
                'multi'          : true,
                'uploader'       : '/photo/savePhoto',
                'fileSizeLimit'  : '10MB',
                'uploadLimit'    : 50,
                'onUploadStart'  : function(file) {
                    $('#file_upload').uploadify('settings', 'formData', {
                        'trip_id'          : '1',
                        'trip_name'        : 'Sample Trip',
                        'destination_trip' : 'Mumbai',
                        'user_id'          : '1',
                        'email'            : '[email protected]',
                        'city_id'          : '12'
                    });
                },
                'onQueueComplete' : function(queueData) {
                    console.log('queueData : ' + queueData);
                    window.location.href = "trip/details/1";
                }
            });
        });

    The controller:

        @RequestMapping(value="photo/{action}", method=RequestMethod.POST)
        public String postHandler(@PathVariable("action") String action, HttpServletRequest request) {
            if (action.equals("savePhoto")) {
                try {
                    MultipartHttpServletRequest multipartRequest = (MultipartHttpServletRequest) request;
                    MultipartFile file = multipartRequest.getFile("the_file");
                    String trip_id = request.getParameter("trip_id");
                    String trip_name = request.getParameter("trip_name");
                    String destination_trip = request.getParameter("destination_trip");
                    String user_id = request.getParameter("user_id");
                    String email = request.getParameter("email");
                    String city_id = request.getParameter("city_id");
                    photo.savePhoto(file, trip_id, trip_name, destination_trip, user_id, email, city_id);
                    photo.updatetrip(photo_id, trip_id);
                } catch (Exception e) { e.printStackTrace(); }
            }
            return "";
        }

    The Spring config:

        <bean class="org.springframework.web.multipart.commons.CommonsMultipartResolver" id="multipartResolver">
            <property name="maxUploadSize" value="10000000"/>
        </bean>
