Search Results

Search found 7884 results on 316 pages for 'ben record'.


  • Unity does not display properly [closed]

    - by Ben Isaacs
    Ubuntu Unity on the Ubuntu Netbook Edition installs without a problem, but when I log in, where the panels should be there is just a blank area with a shadow-effect style thing on it. I can open apps, but the window buttons and the global menu are not visible. Does this mean that I cannot use Unity? I'm running it through Wubi on a Dell Inspiron 1501 laptop. My total amount of graphics memory is 831MB and my dedicated memory is 128MB. The odd thing is that Windows Aero displays without a problem, so why is there a problem with Unity? Hope this helps. Ben

    Read the article

  • Java Desktop Application For Network users

    - by Motasem Abu Aker
    I'm developing a desktop application using Java. My application will run in a network environment where multiple users will access the same database through the application. There will be basic CRUD operations (insert, update, delete and select), which means there will be chances of deadlocks, or of two users trying to update the same record at the same time. I'm using the following: Java Swing for the clients (MVC), MySQL Server for the database (InnoDB), and Java Web Start. MySQL is centralized on the network, and all of the clients connect to it. The application is for ERP purposes. I searched the internet for a good solution to ensure data integrity and to make sure that when one client updates a record, the other clients are aware of it. I read about socket server/client setups and RESTful web services. I don't want to go the web application route and don't want to use any extra libraries. So how can I handle this scenario: If user A updates a record, is there a way to update user B's screen with the new value? If user A starts updating a record, how can I prevent other users from attempting to update the same record?

    Read the article

  • How to display a person record just after saving his data to iPhone Address Book?

    - by camelCase
    This is my code and it works flawlessly, where my_value is a string with separator ','. Everything works fine, but I'd like to display the person record from the address book after I have saved it, i.e. inside the if(isSaved) { // **** code here *** } block. Here is the complete function: - (void) addToAgenda: (NSString*) my_value{ //NSArray *strings = [my_value componentsSeparatedByString: @","]; NSArray *dati=[[NSArray alloc] initWithArray:[my_value componentsSeparatedByString:@","]]; NSString *userwebsite = [dati objectAtIndex:0]; NSString *fname = [dati objectAtIndex:1]; NSString *lname = [dati objectAtIndex:2]; NSString *useremail = [dati objectAtIndex:3]; NSString *usermobile = [dati objectAtIndex:4]; NSString *usercompany = @"xxx"; ABRecordRef aRecord = ABPersonCreate(); CFErrorRef anError = NULL; // first name ABRecordSetValue(aRecord, kABPersonFirstNameProperty, fname, &anError); // last name ABRecordSetValue(aRecord, kABPersonLastNameProperty, lname, &anError); // Phone Number. ABMutableMultiValueRef multi = ABMultiValueCreateMutable(kABMultiStringPropertyType); ABMultiValueAddValueAndLabel(multi, (CFStringRef)usermobile, kABWorkLabel, NULL); ABRecordSetValue(aRecord, kABPersonPhoneProperty, multi, &anError); CFRelease(multi); // Company ABRecordSetValue(aRecord, kABPersonDepartmentProperty, usercompany, &anError); // email NSLog(@"%@", useremail); ABMutableMultiValueRef multiemail = ABMultiValueCreateMutable(kABMultiStringPropertyType); ABMultiValueAddValueAndLabel(multiemail, (CFStringRef)useremail, kABWorkLabel, NULL); ABRecordSetValue(aRecord, kABPersonEmailProperty, multiemail, &anError); CFRelease(multiemail); // website NSLog(@"%@", userwebsite); ABMutableMultiValueRef multiweb = ABMultiValueCreateMutable(kABMultiStringPropertyType); ABMultiValueAddValueAndLabel(multiweb, (CFStringRef)userwebsite, kABWorkLabel, NULL); ABRecordSetValue(aRecord, kABPersonURLProperty, multiweb, &anError); CFRelease(multiweb); if (anError != NULL) NSLog(@"error while creating.."); CFStringRef personname, personlname, personcompind, personemail, personwebsite, personcontact; personname = ABRecordCopyValue(aRecord, kABPersonFirstNameProperty); personlname = ABRecordCopyValue(aRecord, kABPersonLastNameProperty); personcompind = ABRecordCopyValue(aRecord, kABPersonDepartmentProperty); personemail = ABRecordCopyValue(aRecord, kABPersonEmailProperty); personwebsite = ABRecordCopyValue(aRecord, kABPersonURLProperty); personcontact = ABRecordCopyValue(aRecord, kABPersonPhoneProperty); ABAddressBookRef addressBook; CFErrorRef error = NULL; addressBook = ABAddressBookCreate(); BOOL isAdded = ABAddressBookAddRecord (addressBook, aRecord, &error); if(isAdded){ NSLog(@"added.."); } if (error != NULL) { NSLog(@"ABAddressBookAddRecord %@", error); } error = NULL; BOOL isSaved = ABAddressBookSave (addressBook, &error); if(isSaved) { // **** code here *** } if (error != NULL) { NSLog(@"ABAddressBookSave %@", error); UIAlertView *alertOnChoose = [[UIAlertView alloc] initWithTitle:@"Unable to save this time" message:nil delegate:self cancelButtonTitle:nil otherButtonTitles:@"Ok", nil]; [alertOnChoose show]; [alertOnChoose release]; } CFRelease(aRecord); CFRelease(personname); CFRelease(personlname); CFRelease(personcompind); CFRelease(personcontact); CFRelease(personemail); CFRelease(personwebsite); CFRelease(addressBook); }

    Read the article

  • Increase the size of a memory mapped file

    - by sandun dhammika
    I am maintaining a memory-mapped file to store my tree-like data structure, and I ran into this problem while updating the data structure: the file has a fixed size and can't be too large or too small. I have methods like void mapfile_insert_record(RECORD* /* record*/); void mapfile_modify_record(RECORD* /* record*/); Both operations could exceed the free space left in the mapped file. How do I overcome this? What strategy should I use? (1) Calculate whether the file needs to grow, as a pre-condition in both methods. (2) Grow it dynamically, for example by managing a timer that constantly polls the file for its available free space and automatically extends it. Any ideas or patterns to overcome this problem?

    Read the article

  • What is the best program to capture my screen activity as an AVI or MPEG?

    - by raihanchy
    I have already used recordMyDesktop, xvidcap and Kazam. My sound works fine with other audio or video. xvidcap doesn't record sound at all; I have tried many approaches. If I run it as 'padsp xvidcap', it gives an error like "/dev/dsp cannot be found or is missing". I have changed it to /dev/snd, still with no effect. I can record sound through gnome-sound-recorder: after pressing the record button I open pavucontrol, and from the Recording tab I choose 'Monitor of Analog Stereo'. But if I run xvidcap, that option doesn't appear in pavucontrol. Kazam works a bit slowly; it records sound at the beginning of the captured video, but for some unknown reason the sound eventually just goes off, and the video is not as smooth as xvidcap's, even though Kazam outputs H.264/MP4. recordMyDesktop also doesn't capture sound. Can you please help me with either how to get sound with xvidcap or how Kazam could record properly? I am looking for something like Camtasia on Windows. Thanks in advance. Raihan

    Read the article

  • Explanation of various domain name records?

    - by Kumar
    At the time of hosting, normally we just change the name servers in the domain control panel. That's fine if the mail and web servers are the same; when they're different, we need to change the DNS records. While trying to point my blog to my domain name, I came to know about the various types of DNS records - A records, AAAA records, MX records, CNAME records, NS records, TXT records, SRV records, SOA records, etc. I searched on Google, but would like to understand these in more depth. I found this link - http://www.directnic.com/help/faq/?question_id=103 - and got some idea about the different DNS records, but I have some more questions. How do domain name records work? Is there any difference between NS records and the other records in the way they work? Where should the NS record point when using an A record, a CNAME record and an MX record?

    Read the article

  • What am I doing wrong with this use of StructLayout( LayoutKind.Explicit ) when calling a PInvoke st

    - by csharptest.net
    The following is a complete program. It works fine as long as you don't uncomment the '#define BROKEN' at the top. The break is due to a PInvoke failing to marshal a union correctly. The INPUT_RECORD structure in question has a number of substructures that might be used depending on the value in EventType. What I don't understand is that when I define only the single child structure of KEY_EVENT_RECORD it works with the explicit declaration at offset 4. But when I add the other structures at the same offset the structure's content get's totally hosed. //UNCOMMENT THIS LINE TO BREAK IT: //#define BROKEN using System; using System.Runtime.InteropServices; class ConIOBroken { static void Main() { int nRead = 0; IntPtr handle = GetStdHandle(-10 /*STD_INPUT_HANDLE*/); Console.Write("Press the letter: 'a': "); INPUT_RECORD record = new INPUT_RECORD(); do { ReadConsoleInputW(handle, ref record, 1, ref nRead); } while (record.EventType != 0x0001/*KEY_EVENT*/); Assert.AreEqual((short)0x0001, record.EventType); Assert.AreEqual(true, record.KeyEvent.bKeyDown); Assert.AreEqual(0x00000000, record.KeyEvent.dwControlKeyState & ~0x00000020);//strip num-lock and test Assert.AreEqual('a', record.KeyEvent.UnicodeChar); Assert.AreEqual((short)0x0001, record.KeyEvent.wRepeatCount); Assert.AreEqual((short)0x0041, record.KeyEvent.wVirtualKeyCode); Assert.AreEqual((short)0x001e, record.KeyEvent.wVirtualScanCode); } static class Assert { public static void AreEqual(object x, object y) { if (!x.Equals(y)) throw new ApplicationException(); } } [DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] public static extern IntPtr GetStdHandle(int nStdHandle); [DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] public static extern bool ReadConsoleInputW(IntPtr hConsoleInput, ref INPUT_RECORD lpBuffer, int nLength, ref int lpNumberOfEventsRead); [StructLayout(LayoutKind.Explicit)] public struct INPUT_RECORD { [FieldOffset(0)] public short EventType; //union { [FieldOffset(4)] public KEY_EVENT_RECORD KeyEvent; #if BROKEN [FieldOffset(4)] public MOUSE_EVENT_RECORD MouseEvent; [FieldOffset(4)] public WINDOW_BUFFER_SIZE_RECORD WindowBufferSizeEvent; [FieldOffset(4)] public MENU_EVENT_RECORD MenuEvent; [FieldOffset(4)] public FOCUS_EVENT_RECORD FocusEvent; //} #endif } [StructLayout(LayoutKind.Sequential)] public struct KEY_EVENT_RECORD { public bool bKeyDown; public short wRepeatCount; public short wVirtualKeyCode; public short wVirtualScanCode; public char UnicodeChar; public int dwControlKeyState; } [StructLayout(LayoutKind.Sequential)] public struct MOUSE_EVENT_RECORD { public COORD dwMousePosition; public int dwButtonState; public int dwControlKeyState; public int dwEventFlags; }; [StructLayout(LayoutKind.Sequential)] public struct WINDOW_BUFFER_SIZE_RECORD { public COORD dwSize; } [StructLayout(LayoutKind.Sequential)] public struct MENU_EVENT_RECORD { public int dwCommandId; } [StructLayout(LayoutKind.Sequential)] public struct FOCUS_EVENT_RECORD { public bool bSetFocus; } [StructLayout(LayoutKind.Sequential)] public struct COORD { public short X; public short Y; } } UPDATE: For those worried about the struct declarations themselves: bool is treated as a 32-bit value the reason for offset(4) on the data is to allow for the 32-bit structure alignment which prevents the union from beginning at offset 2. 
Again, my problem isn't making PInvoke work at all; it's trying to figure out why these additional structures (supposedly at the same offset) are fouling up the data simply by being added.
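
    One way to narrow this down is to compare the layout the CLR marshaler actually produces against the native INPUT_RECORD layout (20 bytes: a 2-byte EventType, 2 bytes of padding, then a 16-byte union). Below is a minimal, self-contained C# sketch with trimmed copies of the declarations from the question; it only prints sizes and offsets, and it makes two marshaling defaults explicit that are easy to overlook: bool marshals as a 4-byte Win32 BOOL, and char follows the struct's CharSet (ANSI unless CharSet.Unicode is specified).

        using System;
        using System.Runtime.InteropServices;

        static class LayoutCheck
        {
            // Trimmed copies of the question's declarations, with two members sharing the union.
            [StructLayout(LayoutKind.Explicit)]
            struct INPUT_RECORD
            {
                [FieldOffset(0)] public short EventType;
                [FieldOffset(4)] public KEY_EVENT_RECORD KeyEvent;
                [FieldOffset(4)] public MOUSE_EVENT_RECORD MouseEvent;
            }

            // CharSet.Unicode makes UnicodeChar marshal as a 2-byte WCHAR, matching the native struct.
            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
            struct KEY_EVENT_RECORD
            {
                [MarshalAs(UnmanagedType.Bool)] public bool bKeyDown; // 4-byte BOOL (also the default for bool)
                public short wRepeatCount;
                public short wVirtualKeyCode;
                public short wVirtualScanCode;
                public char UnicodeChar;
                public int dwControlKeyState;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct COORD { public short X; public short Y; }

            [StructLayout(LayoutKind.Sequential)]
            struct MOUSE_EVENT_RECORD
            {
                public COORD dwMousePosition;
                public int dwButtonState;
                public int dwControlKeyState;
                public int dwEventFlags;
            }

            static void Main()
            {
                // Expected against the native layout: size 20, KeyEvent at offset 4, KEY_EVENT_RECORD size 16.
                Console.WriteLine("Marshal.SizeOf(INPUT_RECORD)     = " + Marshal.SizeOf(typeof(INPUT_RECORD)));
                Console.WriteLine("OffsetOf(INPUT_RECORD, KeyEvent) = " + Marshal.OffsetOf(typeof(INPUT_RECORD), "KeyEvent"));
                Console.WriteLine("Marshal.SizeOf(KEY_EVENT_RECORD) = " + Marshal.SizeOf(typeof(KEY_EVENT_RECORD)));
            }
        }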

    Read the article

  • Destroying a record via RJS TemplateError (Called ID for nil...)

    - by bgadoci
    I am trying to destroy a record in my table via RJS and having some trouble. I have successfully implemented this before so can't quite understand what is not working here. Here is the setup: I am trying to allow a user of my app to select an answer from another user as the 'winning' answer to their question. Much like StackOverflow does. I am calling this selected answer 'winner'. class Winner < ActiveRecord::Base belongs_to :site belongs_to :user belongs_to :question validates_uniqueness_of :user_id, :scope => [:question_id] end I'll spare you the reverse has_many associations but I believe they are correct (I am using has_many with the validation as I might want to allow for multiple later). Also, think of site like an answer to the question. My link calling the destroy action of the WinnersController is located in the /views/winners/_winner.html.erb and has the following code: <% div_for winner do %> Selected <br/> <%=link_to_remote "Destroy", :url => winner, :method => :delete %> <% end %> This partial is being called by another partial `/views/sites/_site.html.erb and is located in this code block: <% if site.winners.blank? %> <% remote_form_for [site, Winner.new] do |f| %> <%= f.hidden_field :question_id, :value => @question.id %> <%= f.hidden_field :winner, :value => "1" %> <%= submit_tag "Select This Answer" %> Make sure you unselect any previously selected answers. <% end %> <% else %> <div id="winner_<%= site.id %>" class="votes"> <%= render :partial => site.winners%> </div> <% end %> <div id="winner_<%= site.id %>" class="votes"> </div> And the /views/sites/_site.html.erb partial is being called in the /views/questions/show.html.erb file. My WinnersController#destroy action is the following: def destroy @winner = Winner.find(params[:id]) @winner.destroy respond_to do |format| format.html { redirect_to Question.find(params[:post_id]) } format.js end end And my /views/winners/destroy.js.rjs code is the following: page[dom_id(@winner)].visual_effect :fade I am getting the following error and not really sure where I am going wrong: Processing WinnersController#destroy (for 127.0.0.1 at 2010-05-30 16:05:48) [DELETE] Parameters: {"authenticity_token"=>"nn1Wwr2PZiS2jLgCZQDLidkntwbGzayEoHWwR087AfE=", "id"=>"24", "_"=>""} Rendering winners/destroy ActionView::TemplateError (Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id) on line #1 of app/views/winners/destroy.js.rjs: 1: page[dom_id(@winner)].visual_effect :fade app/views/winners/destroy.js.rjs:1:in `_run_rjs_app47views47winners47destroy46js46rjs' app/views/winners/destroy.js.rjs:1:in `_run_rjs_app47views47winners47destroy46js46rjs' Rendered rescues/_trace (137.1ms) Rendered rescues/_request_and_response (0.3ms) Rendering rescues/layout (internal_server_error)

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I have been working on for quite some time now. I have an XML file with over 50000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then by using the SelectNodes method I create an XmlNodeList from which I read the data I want. The process is like this: our worker takes the person's ID card (a simple card with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application records that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, it works perfectly for now, and this is how it looks: On the textBox1_TextChanged event (worker reads the person's ID): foreach(XmlNode node in NodeList){ if(String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(),textBox1.Text)==0) { ControlString = node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(); break; } } textBox2.Focus(); And on the textBox2_TextChanged event (worker reads the document's barcode): if(String.Compare(textBox2.Text,ControlString)==0) { //Create a record and insert it into a SQL database } My question is: how will my application perform with larger XML files (I was told that the XML file might be up to 500,000 records large)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this: Reading an entire record and storing it in a string: private void WriteXml(XmlNode record) { tempXML = record.InnerXml; temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine; temp += tempXML + Environment.NewLine; temp += "</" + record.Name + ">"; SmallerXMLDocument += temp + Environment.NewLine; temp = ""; i++; } tempXML, temp and SmallerXMLDocument are all string variables. Then in the button_Click method I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to create one big string value that would hold all the records, like this: foreach(XmlNode node in nodes) { if(String.Compare(node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(),doctype1)==0) { WriteXML(node); } } My idea was to create a string value (in this case called SmallerXmlDocument), and when I have passed through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keep in mind that there could be up to half a million records in an XML file)? Thanks
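
    For files of that size, an alternative to splitting the file is to stream it instead of loading the whole document: XmlReader walks the file forward-only, so memory use stays flat regardless of the record count. Below is a rough, self-contained sketch of that approach; the file name, the element name "record" and the "ID" attribute are placeholders mirroring the question and would need to match the real schema.

        using System;
        using System.Xml;
        using System.Xml.Linq;

        class RecordStreamer
        {
            static void Main()
            {
                using (XmlReader reader = XmlReader.Create("documents.xml"))
                {
                    reader.MoveToContent();
                    while (!reader.EOF)
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
                        {
                            // Materializes only the current record, never the whole document.
                            XElement record = (XElement)XNode.ReadFrom(reader);
                            string id = (string)record.Attribute("ID");
                            // ... look up the doctype inside this record, compare it with the
                            // scanned barcode, write the result to the SQL database, then move on.
                            Console.WriteLine(id);
                        }
                        else
                        {
                            reader.Read();
                        }
                    }
                }
            }
        }

    The same loop can also write matching records out to smaller files with an XmlWriter, but for a pure lookup the streaming pass usually makes splitting unnecessary.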

    Read the article

  • Is there a library available which can easily record and replay the results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example: //Encapsulates calling NtQuerySystemInformation buffer management. WindowsApi::AutoArray NtDll::NtQuerySystemInformation( SystemInformationClass toGet ) const { AutoArray result; ULONG allocationSize = 1024; ULONG previousSize; NTSTATUS errorCheck; do { previousSize = allocationSize; result.Allocate(allocationSize); errorCheck = WinQuerySystemInformation(toGet, result.GetAs<void>(), allocationSize, &allocationSize); if (allocationSize <= previousSize) allocationSize = previousSize * 2; } while (errorCheck == 0xC0000004L); if (errorCheck != 0) { THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck)); } return result; } //Client of the above. ProcessSnapshot::ProcessSnapshot() { using Dll::NtDll; NtDll ntdll; AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation( NtDll::SystemProcessInformation); BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>(); //Loop through the results, creating Process objects. SYSTEM_PROCESSES * asSysInfo; do { // Loop book keeping asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr); currentPtr += asSysInfo->NextEntryDelta; //Create the process for the current iteration and fill it with data. std::auto_ptr<ProcImpl> currentProc(ProcFactory( static_cast<unsigned __int32>(asSysInfo->ProcessId), this)); NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get()); if (nptr) { nptr->SetProcessName(asSysInfo->ProcessName); } // Populate process threads for(ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx) { SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx]; Thread thread( currentProc.get(), static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread), sysThread.StartAddress); currentProc->AddThread(thread); } processes.push_back(currentProc); } while(asSysInfo->NextEntryDelta != 0); } My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (Well, here it's actually relatively simple but it can be complicated), and writing a test which builds the data structure like the API call does can take 5-6 times as long as writing the code that uses the API. What I'd like to do is take a call to the API, and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things, and be able to restore pointer values to a similar buffer upon replay. (i.e. check each pointer sized value if it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false positive rate here is acceptable) Is there anything out there that does anything like this?

    Read the article

  • Best container to store this information

    - by user2368481
    I'm trying to write a smallish system as a homework exercise. I don't have much experience with containers and I'm not sure what the best way of storing this data would be: an Incident Record object holds instances of Incident Report. Report is a superclass which has 3 subclasses: Police, Fire or Medical. Record must record which of these types apply, and which response teams are to be involved. So Record has to keep track of the Report objects, the type of each report (Police, Fire or Medical) and the teams involved in the reports. I was initially thinking of an array, but that wouldn't be sufficient to hold all the info. Record<>---------Report<|----------Police, Fire or Medical

    Read the article

  • BizTalk: Tag identifier for optional records

    - by Mchandak
    I am sure many of us have faced this issue. Problem: my flat file schema has an optional record marked with a tag identifier. You would think that an input message without that optional record would pass schema validation, but by default BizTalk throws an error about the missing record if you try to 'Validate the instance' in the BizTalk mapper. Resolution: on the schema node, set Parser Optimization to "Complexity" instead of the default "Speed" optimization.

    Read the article

  • Problem using FtpWebRequest to append to a file on a mainframe

    - by MusiGenesis
    I am using FtpWebRequest to append data to a mainframe file. Each record appended is 50 characters long, and I am adding them one record at a time. In our development environment, we do not have a mainframe, so my code was written and tested FTPing to a Windows-based FTP site instead of a mainframe. Initially, I was writing each record using a StreamWriter (using the stream from the FtpWebRequest) and writing each record using WriteLine (which automatically adds a CR/LF to the end). When we ran this for the first time in the test environment (in which we're writing to an actual MVS mainframe), our mainframe contact said the CR/LFs were not able to be read by his program (a green-screen mainframe program of some sort - he's sent me screen captures, which is all I know of it). I changed our code to use Write instead of WriteLine, but now my code executes successfully (i.e no thrown exceptions) when writing multiple records, but no matter how many records we append, he is only able to "see" the first record - according to his mainframe program, there is only one 50-character record in the file. I'm guessing that to fix this, I need to write some other line-delimiting character into the end of the stream (instead of CR/LF) that the mainframe will recognize as a record delimiter. Anybody know what this is, or how else I can fix this problem?

    Read the article

  • Save all XML nodes to the database without looping through them

    - by AndreMiranda
    I have this XML: <Path> <Record> <ID>6534808</ID> <Distance>1.05553036073736</Distance> </Record> <Record> <ID>6542471</ID> <Distance>1.05553036073736</Distance> </Record> ... and about 500 more nodes </Path> And I'm using the code below to get all "Record" nodes: XmlNodeList paths = xDoc.SelectNodes("//Record"); After that, I save each record to the database. The problem is that I'm using a foreach to loop through these nodes, and the count may be more than 500 - sometimes it gets up to 1000 "Record" nodes - and this takes too long... Is there a way to save all of these nodes without looping through them? Thanks!!
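
    If the target database happens to be SQL Server (the question doesn't say, so this is an assumption), one way to avoid a round trip per node is to load the records into a DataTable and hand them over in a single bulk operation. A rough sketch; the destination table and column names are placeholders:

        using System.Data;
        using System.Data.SqlClient;

        class BulkSaver
        {
            static void Save(string xmlPath, string connectionString)
            {
                // ReadXml infers a "Record" table with ID and Distance columns (as strings).
                var ds = new DataSet();
                ds.ReadXml(xmlPath);
                DataTable records = ds.Tables["Record"];

                // One bulk insert instead of one INSERT per node.
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "PathRecords";
                    bulk.ColumnMappings.Add("ID", "ID");
                    bulk.ColumnMappings.Add("Distance", "Distance");
                    bulk.WriteToServer(records);
                }
            }
        }

    ReadXml loads the values as strings; SqlBulkCopy attempts to convert them to the destination column types, so either keep the data clean or convert the DataTable columns up front.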

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit', which is a passthrough stored procedure to another database on the same machine called MyOtherDB, with a stored procedure called 'CreatedAudit' as well. In other words, in MyDb I have CreateAudit, which does the following: EXEC dbo.MyOtherDB.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode): Result = CreateAudit(recordId, "Opened") One line after that, I call: Result2 = CreateAudit(recordId, "Closed") The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.
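
    One way to rule the DAL and statement ordering out is to temporarily bypass SubSonic and run both calls on the same connection inside one explicit transaction, so the second call cannot begin until the first has finished and its insert is visible to it. A rough plain-ADO.NET sketch; the parameter names are guesses and would need to match the real procedure:

        using System.Data;
        using System.Data.SqlClient;

        class AuditWriter
        {
            static void OpenAndClose(string connectionString, int recordId)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        CreateAudit(conn, tx, recordId, "Opened");
                        CreateAudit(conn, tx, recordId, "Closed"); // runs after, and sees, the row above
                        tx.Commit();
                    }
                }
            }

            static void CreateAudit(SqlConnection conn, SqlTransaction tx, int recordId, string action)
            {
                using (var cmd = new SqlCommand("dbo.CreateAudit", conn, tx))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@RecordId", recordId);
                    cmd.Parameters.AddWithValue("@Action", action);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    If this version closes the record reliably, the problem lies in how the two calls are being issued (separate connections, fire-and-forget execution, or connection-pool timing) rather than in the stored procedures themselves.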

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have any ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from the clients. In mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (WebSockets, ...)?

    Read the article

  • .save puts NULL in id field in Rails

    - by mathee
    Here's the model file: class ProfileTag < ActiveRecord::Base def self.create_or_update(options = {}) id = options.delete(:id) record = find_by_id(id) || new record.id = id record.attributes = options puts "record.profile_id is" puts record.profile_id record.save! record end end This gives me the correct print out in my log. But it also says that there's a call to UPDATE that sets profile_id to NULL. Here's some of the output in the log file: Processing ProfilesController#update (for 127.0.0.1 at 2010-05-28 18:20:54) [PUT] Parameter: {"commit"=>"Save", ...} ProfileTag Create (0.0ms) INSERT INTO `profile_tags` (`reputation_value`, `updated_at`, `tag_id`, `id`, `profile_id`, `created_at`) VALUES(0, '2010-05-29 01:20:54', 1, NULL, 4, '2010-05-29 01:20:54') SQL (2.0ms) COMMIT SQL (0.0ms) BEGIN SQL (0.0ms) COMMIT ProfileTag Load (0.0ms) SELECT * FROM `profile_tags` WHERE (`profile_tags`.profile_id = 4) SQL (1.0ms) BEGIN ProfileTag Update (0.0ms) UPDATE `profile_tags` SET profile_id = NULL WHERE (profile_id = 4 AND id IN (35)) I'm not sure I understand why the INSERT puts the value into profile_id properly, but then it sets it to NULL on an UPDATE. If you need more specifics, please let me know. I'm thinking that the save functionality does many things other than INSERTs into the database, but I don't know what I need to specify so that it will properly set profile_id.

    Read the article

  • Which isolation level should I use for the following insert-if-not-present transaction?

    - by Steve Guidi
    I've written a LINQ to SQL program that essentially performs an ETL task, and I've noticed many places where parallelization will improve its performance. However, I'm concerned about preventing uniqueness constraint violations when two threads perform the following task (pseudocode). Record CreateRecord(string recordText) { using (MyDataContext database = GetDatabase()) { Record existingRecord = database.MyTable.FirstOrDefault(record.KeyPredicate()); if(existingRecord == null) { existingRecord = CreateRecord(recordText); database.MyTable.InsertOnSubmit(existingRecord); } database.SubmitChanges(); return existingRecord; } } In general, this code executes a SELECT statement to test for record existence, followed by an INSERT statement if the record doesn't exist. It is encapsulated by an implicit transaction. When two threads run this code for the same instance of recordText, I want to prevent them from simultaneously determining that the record doesn't exist and thereby both attempting to create the same record. An isolation level and explicit transaction will work well, except I'm not certain which isolation level I should use -- Serializable should work, but seems too strict. Is there a better choice?
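
    A sketch of making the isolation level explicit with TransactionScope follows; connections opened inside the scope (including LINQ to SQL's) enlist in the ambient transaction automatically. On SQL Server, Serializable makes the existence-check SELECT take a key-range lock, so two threads can no longer both conclude the record is missing - but one of them may then be picked as a deadlock victim and need a retry. A UNIQUE constraint on the key, plus catching the violation, is the lighter-weight alternative.

        using System;
        using System.Transactions;

        static class SerializableTx
        {
            // Wraps a unit of work in an explicit Serializable transaction.
            // Usage sketch: var rec = SerializableTx.Run(() => CreateRecord(recordText));
            public static T Run<T>(Func<T> body)
            {
                var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                {
                    T result = body();   // the check-then-insert from the question runs here
                    scope.Complete();
                    return result;
                }
            }
        }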

    Read the article

  • Linq, should I join those two queries together?

    - by 5YrsLaterDBA
    I have a Logins table which records when a user logs in, logs out, or fails to log in, together with a timestamp. Now I want to get the list of failed logins that happened after the last successful login and within the last 24 hours. What I am doing now is getting the last login timestamp first, then using a second query to get the final list. Do you think I should join those two queries together? Why or why not? var lastLoginTime = (from inRecord in db.Logins where inRecord.Users.UserId == userId && inRecord.Action == "I" orderby inRecord.Timestamp descending select inRecord.Timestamp).Take(1); if (lastLoginTime.Count() == 1) { DateTime lastInTime = (DateTime)lastLoginTime.First(); DateTime since = DateTime.Now.AddHours(-24); String actionStr = "F"; var records = from record in db.Logins where record.Users.UserId == userId && record.Timestamp >= since && record.Action == actionStr && record.Timestamp > lastInTime orderby record.Timestamp select record; }
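
    Folding the two into one query is mostly a matter of turning the last-login lookup into a subquery, so the database does the filtering in a single round trip; LINQ to SQL should translate the Max() into a scalar subquery, and if the user has never logged in the comparison against NULL returns no rows, which matches the original guard. A sketch reusing db, userId and the column names from the question's code:

        // db and userId are the same objects used in the question's two queries.
        DateTime since = DateTime.Now.AddHours(-24);

        var failedSinceLastLogin =
            from record in db.Logins
            where record.Users.UserId == userId
                  && record.Action == "F"
                  && record.Timestamp >= since
                  && record.Timestamp > (from inRecord in db.Logins
                                         where inRecord.Users.UserId == userId
                                               && inRecord.Action == "I"
                                         select inRecord.Timestamp).Max()
            orderby record.Timestamp
            select record;

    Whether it is worth it depends on round-trip cost versus readability; keeping two queries is perfectly reasonable when the first result is reused elsewhere.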

    Read the article

  • Can a second stored procedure doing the same thing finish before the first one?

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit', which is a passthrough to a stored procedure called 'CreatedAudit' in another database on the same machine. I call the CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode): Result = CreateAudit(recordId, "Opened") Right after that, I call: Result2 = CreateAudit(recordId, "Closed") The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.

    Read the article

  • MySQL: inserting records from table A into tables B and C (linked by foreign key) depending on column values in table A

    - by Chez
    Hi all, I have been searching high and low for a simple solution to a MySQL insert problem. The problem is as follows: I am putting together an organisational database consisting of departments and desks. A department may or may not have n desks. Departments and desks each have their own table, linked by a foreign key in desks to the relevant record in departments (i.e. its primary key). I have a temporary table which I use to hold all new department data (n records long). In this table, the desk records for a department directly follow the department record. In the TEMP table, if the column department_name has a value, the row is a department; if it doesn't, it will have a value in the column desk and is therefore a desk related to the department above it. As I said, there may be several desk records until you get to the next department record. So what I want to do is the following: insert the departments into the departments table and their desks into the desks table, generating a foreign key in each desk record to the relevant department's id. In pseudo-ish code: for each record in the TEMP table: if it is a department, INSERT the record into Departments, then get the id of the newly created Department record and store it somewhere; else if it is a desk, INSERT the desk into the desks table with the relevant department's id as the foreign key. Note once again that all of a department's desks directly follow the department in the TEMP table. Many thanks

    Read the article

  • Elegant ways to print out a bunch of instance attributes in Python 2.6?

    - by wds
    First some background. I'm parsing a simple file format, and wish to re-use the results in python code later, so I made a very simple class hierarchy and wrote the parser to construct objects from the original records in the text files I'm working from. At the same time I'd like to load the data into a legacy database, the loader files for which take a simple tab-separated format. The most straightforward way would be to just do something like: print "%s\t%s\t....".format(record.id, record.attr1, len(record.attr1), ...) Because there are so many columns to print out though, I thought I'd use the Template class to make it a bit easier to see what's what, i.e.: templ = Template("$id\t$attr1\t$attr1_len\t...") And I figured I could just use the record in place of the map used by a substitute call, with some additional keywords for derived values: print templ.substitute(record, attr1_len=len(record.attr1), ...) Unfortunately this fails, complaining that the record instance does not have an attribute __getitem__. So my question is twofold: do I need to implement __getitem__ and if so how? is there a more elegant way for something like this where you just need to output a bunch of attributes you already know the name for?

    Read the article

  • Weird problem: the exact same XML works on one host but not on another

    - by Ofear
    Hi all! I have searched a lot for this but can't find an answer. I have made a working XML parser using PHP. Until today I hosted my files on a free web host, and everything worked just fine. Today I got access to my college server and hosted my files there, and now, for some reason, I can't make the parser work the way it did on the free host. Please look at these files: working site: XML file: [http://ofear.onlinewebshop.net/asce/calendar.xml], working parser: [http://ofear.onlinewebshop.net/asce/calendar.php] (the lower table is the XML; it's Hebrew). Not working site: XML file: [http://apps.sce.ac.il/agoda/calendar.xml], not working parser: [http://apps.sce.ac.il/agoda/calendar.php]. Does anyone have an idea why it's not working? Those are the same files and they should work. Maybe it's a server problem? calendar.xml: <?xml version="1.0" encoding="UTF-8" ?> <events> <record> <event>??? ???? ????? ???? ???</event> <eventDate>30/12/2010</eventDate> <desc>?????? ?? ????</desc> </record> <record> <event>??? ???? ??????? - 2 : ???? ??? ???? ??????</event> <eventDate>22/12/2010</eventDate> <desc>????? ???? ??????? ?????? ??? ???? ??????? ?????? ????? ?????? ?? ??? ???? ??????? 2 ??????? ????? ???????? 22-23 ?????? 2010. ???? ????? ???? ????? "?????? ????"</desc> </record> <record> <event>????? ???? ?????? ?????? - ?? ????</event> <eventDate>5/12/2010</eventDate> <desc>??? ????? 17:30-20:45</desc> </record> </events> parser: <?php $doc = new DOMDocument(); $doc->load( 'calendar.xml' ); $events = $doc->getElementsByTagName( "record" ); foreach( $events as $record ) { $events = $record->getElementsByTagName( "event" ); $event = $events->item(0)->nodeValue; $eventDates= $record->getElementsByTagName( "eventDate" ); $eventDate= $eventDates->item(0)->nodeValue; $descs = $record->getElementsByTagName( "desc" ); $desc = $descs->item(0)->nodeValue; echo "<tr><td>$event</td><td>$eventDate</td><td>$desc</td></tr>"; } ?> After a little debugging I saw that it stops here: $doc = new DOMDocument(); and it doesn't do anything after that. I think that line is the cause.

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet. (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to their A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.

    Read the article
