Search Results

Search found 3052 results on 123 pages for 'jawahar sync'.

Page 108 of 123

  • Getting auth token for dropbox account from accountmanager in android

    - by user1490880
    I am trying to get an auth token for a Dropbox account configured on the device, using AccountManager. I am calling:

        accountManager.getAuthToken(account, "DROPBOX", null, Hello.this,
            new GetAuthTokenCallback(), null); // "account" is the Dropbox account

    I see an Allow/Deny page and I click Allow, but the callback is never invoked and I never get the auth token. I did get an auth token for a Google account this way (with a different authTokenType), so what am I missing? I am not sure about the authTokenType parameter for Dropbox, and whether there are other Dropbox-specific parameters I am missing, such as the options bundle. Is this approach even possible for Dropbox? For reference, the method signature is:

        public AccountManagerFuture<Bundle> getAuthToken(Account account,
            String authTokenType, Bundle options, Activity activity,
            AccountManagerCallback<Bundle> callback, Handler handler)

    Link: http://developer.android.com/reference/android/accounts/AccountManager.html

    UPDATE: Since we can create a Dropbox account under Accounts and Sync in Android's Settings, I assume there is a Dropbox authenticator that implements all the methods of AbstractAccountAuthenticator, including getAuthToken(), so I think Dropbox should be able to hand out auth tokens. Also, Dropbox uses OAuth 1.0, whereas AccountManager works with OAuth 2.0; could that be the issue? Can anyone comment on this?
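    In case the problem is how the future is consumed, here is a minimal sketch of the callback side; note that the "DROPBOX" authTokenType is an assumption carried over from the question, not a value documented by Dropbox:

        import android.accounts.AccountManager;
        import android.accounts.AccountManagerCallback;
        import android.accounts.AccountManagerFuture;
        import android.os.Bundle;
        import android.util.Log;

        // Hedged sketch: getResult() is what actually surfaces the token
        // (or the failure) once the Allow/Deny flow completes.
        class GetAuthTokenCallback implements AccountManagerCallback<Bundle> {
            @Override
            public void run(AccountManagerFuture<Bundle> future) {
                try {
                    Bundle result = future.getResult(); // blocks its thread
                    String token = result.getString(AccountManager.KEY_AUTHTOKEN);
                    Log.d("Auth", "token: " + token);
                } catch (Exception e) { // OperationCanceled/Authenticator/IOException
                    Log.e("Auth", "getAuthToken failed", e);
                }
            }
        }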

    Read the article

  • Achieving Thread-Safety

    - by Smasher
    Question: How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, or things to look for?

    Background: I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (which spawns another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing, and I want to make sure the application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize it is some kind of synchronization bug where I forgot to synchronize my objects properly. Hence my question about best practices, testing of thread-safety, and things like that.

    mghie: Thanks for the answer! I should perhaps be a little more precise. Just to be clear: I know the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to tell threading problems apart from other implementation problems. Nevertheless, I keep forgetting to add proper synchronization from time to time. To give an example, I used the RTL sort function in my code; it looked something like:

        FKeyList.Sort(CompareKeysFunc);

    It turns out I had to synchronize FKeyList while sorting; it just didn't come to mind when I initially wrote that simple line of code. These are the things I want to talk about: what are the places where one easily forgets to add synchronization code, and how do YOU make sure you added sync code in all the important places?
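    As a minimal sketch of the kind of guard that was missing here (assuming a TCriticalSection field, called FLock below, owned by the same object as FKeyList):

        // Hedged sketch: every access to FKeyList, including the sort,
        // goes through the same lock. FLock is an assumed TCriticalSection.
        FLock.Acquire;
        try
          FKeyList.Sort(CompareKeysFunc);
        finally
          FLock.Release;
        end;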

    Read the article

  • Windows Azure - access webrole local storage from separate workerrole

    - by Brett Smith
    I'm running an application on Windows Azure. The MVC views need to be dynamic; I started by storing them as records in the database, but I'm keen to move them to a physical location. My first idea was to create the physical files via code, which worked great and sped up page loads dramatically. This was, of course, before I realised that the files were only available for the duration of the role.

    Next I looked at a start-up task to create the files when the role started, but then I realised that separate instances weren't going to stay in sync unless I monitored the database for changes. So I moved from a start-up task to a function in the role's Run method that checks the database every 10 minutes to see whether changes have occurred. The problem is that this seems to choke the application (at least during warm-up).

    Ideally I would like to move that function into its own worker role that sits there and pushes files out to the web role instances, but I'm unsure how to go about accessing a web role's local storage from the worker role. Can anybody tell me whether this is actually possible, and hopefully point me in the right direction? To clarify what I'm trying to achieve:

        - a view is created in the user interface running on one web role and stored in the database
        - a separate (front-end) web role has a client-side application with a VirtualPathProvider pointing view requests to local storage (a LocalResource)
        - a separate worker role creates the view structure and loads it into the front-end web role's local storage

    Read the article

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I've answered many questions by myself, but some are left. Let me explain what I want to do.

    First, at the beginning, the application will have only one server (short term) with DNS, PHP, MySQL, data, and memcache. Second, I will split them in two: one server with DNS, MySQL, and memcache; one with data and PHP. Third is where the problem is; I don't know exactly how to lay it out so the application keeps running well. I could do:

        Front: load balancer, memcache, DNS
        Web 1: PHP, data
        Web 2: PHP, data
        MySQL

    That would be the scheme; all PHP sessions are kept in the DB. BUT how do I sync the data? Do I run rsync to keep the copies up to date? Do I put the data on a separate (network) disk to be sure? In that case, how do I handle user uploads? And if the website gets more successful and we have to move to bigger structures, wouldn't that create some latency on updates? Or would it be better to go directly to Amazon's web services?

    Some info: I use CodeIgniter as the framework, and Linux as the web server (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a Data Flow task, I can slip a Row Count into the processing flow and place the count into a variable, then later use that variable to conditionally perform some other work if the row count was 0. That works well for me, but I have no corresponding strategy for Execute SQL tasks expected to return a single row. In that case I'm returning the values into variables, and if the lookup produces no rows, the SQL task fails when assigning values to those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a COUNT(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the COUNT. That's ugly, because I'd have the same query in two places that I need to keep in sync. Is there a better way?
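    One common workaround (a sketch; the table and column names are made up) is to wrap the selected values in aggregates so the query always returns exactly one row, and to return the match count alongside for branching:

        -- Hedged sketch: MAX() over zero rows yields a single NULL row instead
        -- of no row, so the Execute SQL task's variable assignment always succeeds.
        SELECT MAX(t.SomeValue) AS SomeValue,   -- NULL when nothing matched
               COUNT(*)         AS MatchCount   -- branch on this afterwards
        FROM dbo.SomeLookup AS t
        WHERE t.SomeKey = ?;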

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to maintain replicated copies of the main databases, so that reports can point at the copies instead of locking up our main databases. I have set up the 3 databases as publications, with 3 subscribers receiving the transactions, instantaneously I hope! What seems to be happening is that, according to the Insert Tracer function, replication from publisher to distributor takes under 2 seconds, but replicating on to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for one of two reasons:

        1. The SQL statements used to query the database are obtaining locks that stop the transactions from updating the subscribers.
        2. The subscribers are simply too busy for replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if I use "View Subscription Details" and then click Start, it syncs within seconds. My goal is to have the data syncing continuously (ideally), or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? (Note that the -Continuous flag is set!)

    Read the article

  • Images not showing in ie7 using jquery cycle and jCarouselLite plugin

    - by Geetha
    Hi all, I am using the jQuery Cycle and jCarouselLite plugins to display images as a slideshow. The images are not displayed in IE7, but everything works perfectly in IE6. The image properties inside the cycle control show:

        Protocol:       Not available
        Type:           Not available
        Address (URL):  Not available
        Size:           Not available
        Dimensions:     100X100

    Yet the control does have the URL, and if I request that image URL separately, the image shows. Code:

        $('#slide').cycle({
            fx: 'fade',
            continuous: true,
            speed: 7500,
            timeout: 55000,
            sync: 1
        });

    HTML code:

        <div id="slide">
            <img src="samp1.jpg" width="664" height="428" border="0" />
            <img src="samp2.jpg" width="664" height="428" border="0" />
            <img src="samp3.jpg" width="664" height="428" border="0" />
            <img src="samp4.jpg" width="664" height="428" border="0" />
            <img src="samp5.jpg" width="664" height="428" border="0" />
            <img src="samp6.jpg" width="664" height="428" border="0" />
            <img src="samp7.jpg" width="664" height="428" border="0" />
            <img src="samp8.jpg" width="664" height="428" border="0" />
        </div>

    Geetha.

    Read the article

  • Can an application affect TCP retransmits

    - by sipwiz
    I'm troubleshooting some communications issues, and in the network traces I occasionally come across TCP sequence errors. One example I've got is:

        1. Server to Client: Seq=3174, Len=50
        2. Client to Server: Ack=3224
        3. Server to Client: Seq=3224, Len=50
        4. Client to Server: Ack=3224
        5. Server to Client: Seq=3274, Len=10
        6. Client to Server: Ack=3224, SLE=3274, SRE=3284

    Packets 4 and 5 are recorded in the trace (which is from a router between the client and server) at almost exactly the same time, so they most likely crossed in transit. The TCP session has got out of sync, with the client missing the last two transmissions from the server. Those two packets should have been retransmitted, but they weren't; the next entry in the trace is a RST packet from the client, 24 seconds after packet 6. My question is: what could be responsible for the failure to retransmit the server data from packets 3 and 5? I would assume the retransmit happens at the operating-system level, but is there any way the application could influence it and stop it from being sent? A thread blocking or being put to sleep, or something like that?

    Read the article

  • lightweight/portable VCS for server-hopping DBA?

    - by Aaron
    I'm looking for a VCS that'll help me keep all of my work scripts in sync. Requirements:

        - portable (as in flash drive, not code-level)
        - runs on Windows XP and Server 2003+
        - no installation dependencies (Cygwin, Perl, Python)

    I use Mercurial on my work machine for version control of the various T-SQL, ksh, Perl, and CMD/BAT scripts that I maintain as an MS SQL Server DBA and Unix sysadmin. So far hg has worked for my AIX boxes: I mount my home directory as I log in and deal with the repo as if it were local. I haven't been able to find a similar solution for the Windows machines I use. On most of them I do not have local admin rights, and even if I did, I'd rather not install (and maintain) Python + Mercurial on all of them. I can't get to my home directory on them remotely, which leaves a client running on each machine as the only option. Bonus points for an answer that would let me use a single repo for both the Windows and Unix machines. :) I'm running WinXP, with heavy use of Cygwin and a CrunchBang VM.

    Read the article

  • MSSQL: Views that use SELECT * need to be recreated if the underlying table changes

    - by cbp
    Is there a way to make views that use SELECT * stay in sync with the underlying table? What I have discovered is that if changes are made to the underlying table from which all columns are selected, the view needs to be 'recreated'. This can be achieved simply by running an ALTER VIEW statement, but it can lead to some pretty dangerous situations: if you forget to recreate the view, it will not return the correct data. In fact it can return seriously messed-up data, with the names of the columns all wrong and out of order. Nothing will pick up that the view is wrong unless you happen to have it covered by a test, or a data integrity check fails. For example, Red Gate SQL Compare doesn't pick up the fact that the view needs to be recreated. To replicate the problem, try these statements:

        CREATE TABLE Foobar (Bar varchar(20))
        CREATE VIEW v_Foobar AS SELECT * FROM Foobar
        INSERT INTO Foobar (Bar) VALUES ('Hi there')
        SELECT * FROM v_Foobar
        ALTER TABLE Foobar ADD Baz varchar(20)
        SELECT * FROM v_Foobar
        DROP VIEW v_Foobar
        DROP TABLE Foobar

    I am tempted to stop using SELECT * in views, which will be a PITA. Is there a setting somewhere, perhaps, that could fix this behaviour?
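    For what it's worth, SQL Server does ship a system procedure that rebinds a non-schema-bound view's metadata without rewriting it; a sketch against the repro above:

        -- sp_refreshview refreshes the view's column metadata, picking up
        -- the new Baz column. Run it after the ALTER TABLE step.
        EXEC sp_refreshview 'v_Foobar';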

    Read the article

  • Android - Basic CRUD (REST/RPC client) to remote server

    - by bsreekanth
    There is a lot of discussion about REST client implementation in Android, based on the Google I/O 2010 talk by Virgil Dobjanschi. My requirements may not be as involved, as I have some freedom to choose the configuration:

        - I only target tablets.
        - Configuration changes can be prevented if there is no other easy way (fixing landscape mode, etc.).

    What I am trying to achieve: basic CRUD operations against my server (JSON-RPC/REST), basically mimicking an AJAX request from an Android app (no WebView; I need a native app). Based on the talk mentioned above and some reading, I see these options:

        1. Implement any of the 3 patterns mentioned in the Google I/O talk. The last pattern may be the most suitable, since I don't care much about caching, but I'm not sure how "real time" its sync implementation is.
        2. Use an HTTP request in an AsyncTask. Simplest, but I need to avoid re-sending the request during a change in device configuration (orientation change, etc.). Even if I fix one orientation, recreation of the activity still happens, so I'd need to handle that elegantly.
        3. Use a service to handle the HTTP requests. So far the advice is to use a service for long-running requests; I'm not sure whether it is a good approach for simple GET/POST/PUT requests (see the sketch below).

    Please share your experience on what could be the best way.
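    As a sketch of option 3 (the endpoint extra and the class name are made up, and result delivery is left as a comment), an IntentService gives you a worker thread that survives activity recreation:

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import android.app.IntentService;
        import android.content.Intent;

        // Hedged sketch: a fire-and-forget GET on a background thread.
        public class RestService extends IntentService {
            public RestService() { super("RestService"); }

            @Override
            protected void onHandleIntent(Intent intent) {
                String endpoint = intent.getStringExtra("endpoint"); // assumed extra
                try {
                    HttpURLConnection conn =
                        (HttpURLConnection) new URL(endpoint).openConnection();
                    InputStream body = conn.getInputStream();
                    // ... parse the JSON, persist it (DB/ContentProvider),
                    // then notify the UI, e.g. via a broadcast.
                } catch (Exception e) {
                    // surface the failure to the UI in the same way
                }
            }
        }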

    Read the article

  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
          UPDATE DSMS
          SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
          WHERE DSM = :DSM;
          IF (SQL%ROWCOUNT = 0) THEN
            INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
          END IF;
        END;

    This runs fine, except that the UPDATE performs a dummy update when the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there is a replication/synchronization system built over this table, using triggers to capture updated records, and executing this statement frequently for many records would cause huge traffic in the triggers and the sync system. Is there a simple way to reformulate this code so that the UPDATE doesn't touch the record unless necessary, without the following IF-EXISTS check, which I find neither sleek nor necessarily efficient for this task?

        DECLARE
          CNT NUMBER;
        BEGIN
          SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;
          IF CNT > 0 THEN
            UPDATE DSMS
            SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
            WHERE DSM = :DSM
              AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
          ELSE
            INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
          END IF;
        END;
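    One possible reformulation (a sketch, assuming Oracle 10g or later, where WHEN MATCHED accepts a WHERE clause, and assuming the compared columns are NOT NULL, since != never matches NULLs):

        -- Hedged sketch: a single statement; the UPDATE branch is skipped
        -- when nothing actually changed, so the triggers stay quiet.
        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
               FROM dual) s
        ON (d.DSM = s.DSM)
        WHEN MATCHED THEN UPDATE
          SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
          WHERE d.SURNAME != s.SURNAME
             OR d.FIRSTNAME != s.FIRSTNAME
             OR d.VALID != s.VALID
        WHEN NOT MATCHED THEN
          INSERT (DSM, SURNAME, FIRSTNAME, VALID)
          VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);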

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group, to meet testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than trying to re-implement a whole lot of the same on a file system. I don't know whether a NoSQL approach is even the best fit for what we're trying to accomplish, but perhaps if I describe what we need and want, you can help:

        - Most of our files are large, 50+ GB in size, held in a proprietary third-party format. We need to access each file by a name/date/source/time/artifact combination; essentially a key-value-pair style lookup.
        - When we query for a file, we don't want to load all of it into memory; the files are really too large and would swamp our server. We want to get a reference to the file and then use a proprietary third-party API to ingest portions of it.
        - We want to easily add, remove, and export files from storage.
        - We'd like automatic file replication between two servers (we can write a script for this), i.e. syncing the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
        - We also have other, smaller files that have a tree-type relationship with the big files: one file's content points to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
        - We'd prefer a Python, C, or C++ API for working with a system like this, but most of us are experienced with a variety of languages. We don't mind, as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this?

    Read the article

  • Python: two loops at once

    - by Stephan Meijer
    I've got a problem: I am new to Python and I want to run multiple loops. I want to run a WebSocket client (Autobahn) alongside a loop that shows which files are edited in a specific folder (pyinotify, or alternatively Watchdog). Both run forever, great. Is there a way to run them at once and send a message over the WebSocket connection while the file-system watcher is running, whether with callbacks, multithreading, multiprocessing, or just separate files?

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

    If we run this, it works. But if we run this:

        factory = WebSocketClientFactory("ws://localhost:8888/ws", debug=False)
        factory.protocol = self.webSocket
        connectWS(factory)
        reactor.run()

        # WebSocket client running now; run the file watcher
        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE  # watched events

        class EventHandler(pyinotify.ProcessEvent):
            def process_IN_CREATE(self, event):
                print "Creating:", event.pathname

            def process_IN_DELETE(self, event):
                print "Removing:", event.pathname

        handler = EventHandler()
        notifier = pyinotify.Notifier(wm, handler)
        wdd = wm.add_watch('/tmp', mask, rec=True)
        notifier.loop()

    this would be two loops, but since reactor.run() is already a loop, the code after it never runs at all. For your information: this project is going to be a sync client. Thanks a lot!
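    One way out (a sketch, reusing the handler and mask defined above) is pyinotify's ThreadedNotifier, which runs the watch loop on its own thread so the Twisted reactor can keep the main thread:

        # Hedged sketch: the watcher runs in the background while the
        # reactor (and the Autobahn client) owns the main thread.
        import pyinotify
        from twisted.internet import reactor

        wm = pyinotify.WatchManager()
        mask = pyinotify.IN_DELETE | pyinotify.IN_CREATE
        notifier = pyinotify.ThreadedNotifier(wm, EventHandler())  # handler from above
        wm.add_watch('/tmp', mask, rec=True)

        notifier.start()   # file watching now runs on a background thread
        reactor.run()      # blocks until the reactor is stopped
        notifier.stop()    # clean up the watcher thread on shutdown

    Note that anything the handler wants to send over the WebSocket should be marshalled back to the reactor thread with reactor.callFromThread, since Twisted objects are not thread-safe.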

    Read the article

  • Jquery cycle plugin and image tag from code behind

    - by Geetha
    Hi all, I am using the Cycle plugin to show images, and I am binding the HTML for the images from code-behind. Problem: the images are not getting displayed; if I hard-code the image tags, it works. Code:

        $(window).load(function() {
            $.ajax({
                type: "POST",
                contentType: "application/json; charset=utf-8",
                data: "{}",
                url: "Default.aspx/GetImage",
                dataType: "json",
                success: function(data) {
                    alert(data.d);
                    $("#sample").html(data.d);
                    $('#sample').cycle({
                        fx: 'fade',
                        continuous: true,
                        speed: 7500,
                        timeout: 55000,
                        pause: 1,
                        sync: 1
                    });
                }
            });
        });

    data.d will give:

        <img src="Images/Frontbanner/s0.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s111.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s112.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s113.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s114.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s115.jpg" width="664" height="428" border="0" />
        <img src="Images/Frontbanner/s116.jpg" width="664" height="428" border="0" />

    Read the article

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and the server side. When syncing up to the server, the server returns a map of local UUIDs (luids) to server UUIDs (suids); upon receiving this map, clients update their records' suid attributes with the appropriate values.

    However, say a client record, e.g. a todo, has an attribute list_id which holds the foreign key to the todo's list record. I use luids in foreign keys on clients, so when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the luid-to-suid mappings (per client ID) and, for each foreign key in a command, look up the suid for that particular client and use it instead.

    I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys", where someone seemed to suggest my current solution as one option, along with composite keys using suids and autoincrementing sequences, or using negative IDs for client IDs and then updating all negative IDs with the suids. Both of those other options seem like a lot more work. Thanks, Saimon

    Read the article

  • How do I keep my DataService up to date with ObservableCollection?

    - by joebeazelman
    I have a class called CustomerService which simply reads a collection of customers from a file (or creates one) and passes it back to the main model view, where it is turned into an ObservableCollection. What's the best practice for making sure the items in the CustomerService and the ObservableCollection stay in sync? I'm guessing I could hook up the CustomerService object to respond to RaisePropertyChanged, but isn't that only for use with WPF controls? Is there a better way?

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // Assumes MVVM Light's ViewModelBase, which supplies RaisePropertyChanged.
        public class MainModelView : ViewModelBase
        {
            public MainModelView()
            {
                _customers = new ObservableCollection<CustomerViewModel>(
                    new CustomerService().GetCustomers());
            }

            public const string CustomersPropertyName = "Customers";

            private ObservableCollection<CustomerViewModel> _customers;

            public ObservableCollection<CustomerViewModel> Customers
            {
                get { return _customers; }
                set
                {
                    if (_customers == value)
                    {
                        return;
                    }
                    var oldValue = _customers;
                    _customers = value;
                    // Update bindings and broadcast change using GalaSoft.MvvmLight.Messaging
                    RaisePropertyChanged(CustomersPropertyName, oldValue, value, true);
                }
            }
        }

        public class CustomerService
        {
            // Load all customers from a file on disk (stubbed out here).
            private readonly List<CustomerViewModel> _customers = new List<CustomerViewModel>
            {
                new CustomerViewModel(new Customer("Bob", "")),
                new CustomerViewModel(new Customer("Bob 2", "")),
                new CustomerViewModel(new Customer("Bob 3", "")),
            };

            public IEnumerable<CustomerViewModel> GetCustomers()
            {
                return _customers;
            }
        }

    Read the article

  • application trying to connect to mirrored sql db

    - by hp
    Hello, we have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:

        1. "Login failed for user 'userid'"
        2. "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"

    We are running SQL 2005 with a principal and a mirror DB (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror and see the failed login messages there. The principal DB is running fine and the other web apps are working great. This goes on for maybe 10 minutes, then the app pool recycles and the application starts hitting the principal again. Is there a configuration I have incorrect? My theory is that the principal is forwarding requests to the mirror, but that should never happen. Any help?
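    For comparison, a minimal sketch of how ADO.NET is usually told about a mirror (server and database names are placeholders); with an explicit Failover Partner, the client should only try the mirror after an actual failover:

        // Hedged sketch: SqlClient connection string naming both partners.
        using System.Data.SqlClient;

        class Demo
        {
            static void Main()
            {
                var cs = "Data Source=PrincipalServer;" +
                         "Failover Partner=MirrorServer;" +
                         "Initial Catalog=AppDb;Integrated Security=True;";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open(); // connects to whichever partner is currently principal
                }
            }
        }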

    Read the article

  • Unable to Get a Correct Time when I am Calling serverTime using jquery.countdown.js + Asp.net ?

    - by user312891
    When I call the function below, I am unable to get a correct answer: both var shortly and newTime hold the same time, one coming from the client side and the other synced with the server. (Plugin: http://keith-wood.name/countdown.html.) I am waiting for your response, thanks.

        $(function() {
            var shortly = new Date('April 9, 2010 20:38:10');
            var newTime = new Date('April 9, 2010 20:38:10');

            $('#defaultCountdown').countdown({
                until: shortly,
                onExpiry: liftOff,
                onTick: watchCountdown,
                serverSync: serverTime
            });
            $('#div1').countdown({ until: newTime });
        });

        function serverTime() {
            var time = null;
            $.ajax({
                type: "POST",
                // page name (in which the method should be called) and method name
                url: "Default.aspx/GetTime",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                // if you want to pass parameters to the server-side function,
                // put them in data; otherwise leave it as an empty object
                data: "{}",
                async: false,
                success: function(msg) {
                    // got the response from the server
                    time = new Date(msg.d);
                    alert(time);
                },
                error: function(msg) {
                    time = new Date();
                    alert('1');
                }
            });
            return time;
        }

    Read the article

  • Database design: Calculating the Account Balance

    - by 001
    How do I design the database to calculate the account balance?

    1) Currently I calculate the account balance from the transaction table. My transaction table has "description", "amount", etc.; I add up all the "amount" values to work out the user's account balance. I showed this to my friend and he said that is not a good solution: when my database grows, it's going to slow down. He said I should create a separate table to store the calculated account balance. If I did that, I would have to maintain two tables, and it's risky; the account balance table could go out of sync. Any suggestions?

    EDIT, OPTION 2: should I add an extra "Balance" column to my transaction table? Then I would not need to go through many rows of data to perform the calculation. Example: John buys $100 of credit, is debited $60, then adds $200 of credit:

        Amount  $100, Balance  $100.
        Amount  -$60, Balance   $40.
        Amount  $200, Balance  $240.
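    If it helps frame the trade-off: on engines with window functions, the running balance of option 2 can also be computed on the fly rather than stored (a sketch; the table and column names are assumed from the example above):

        -- Hedged sketch: per-user running balance without a stored column.
        SELECT id,
               description,
               amount,
               SUM(amount) OVER (PARTITION BY user_id
                                 ORDER BY id) AS balance
        FROM transactions
        WHERE user_id = 42
        ORDER BY id;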

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord,
                                   StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat is that when the user types extremely fast in the RichTextBox, and I execute the filtering + binding on every KeyUp, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke triggers the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I also attribute it to IndexOf's string operation; or perhaps my list of strings could be optimised with some kind of index that speeds up searching. Any suggestions or code samples are much welcomed. Thanks.
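    One common pattern for the race (a sketch; the method, field, and control names are made up) is to have each keystroke cancel the previous filter task, so only the latest result ever reaches the ListBox:

        // Hedged sketch: last-keystroke-wins filtering off the UI thread.
        private CancellationTokenSource _filterCts;

        private void OnCurrentWordChanged(string currentWord)
        {
            if (_filterCts != null) _filterCts.Cancel();
            _filterCts = new CancellationTokenSource();
            var token = _filterCts.Token;

            Task.Factory.StartNew(() =>
            {
                var filtered = listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord,
                        StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList();
                if (!token.IsCancellationRequested)
                    Dispatcher.BeginInvoke(
                        new Action(() => listBox.ItemsSource = filtered));
            }, token);
        }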

    Read the article

  • Handling duplicate insertion

    - by Francis
    So I've got this piece of code which logically should work, but Entity Framework is behaving unexpectedly. Here:

        foreach (SomeClass someobject in allObjects)
        {
            Supplier supplier = new Supplier();
            supplier.primary_key = someobject.id;
            supplier.name = someobject.displayname;
            try
            {
                sm.Add(supplier);
                ro.Created++;
            }
            catch (Exception ex)
            {
                ro.Error++;
            }
        }

    Here's what I have in sm.Add():

        public Supplier Add(Supplier supplier)
        {
            try
            {
                _ctx.AddToSupplier(supplier);
                _ctx.SaveChanges();
                return supplier;
            }
            catch (Exception ex)
            {
                throw;
            }
        }

    allObjects can contain records that have the same id. My code needs to support this by just moving on to the next record and trying to insert it, which I think should work. When a duplicate occurs, an exception is thrown saying that records with duplicate PKs cannot be inserted (of course); the exception mentions the value of the PK, for example 1000. All is well, and a new supplier is passed to sm.Add() containing a PK that's never been used before (1001). Weirdly though, on SaveChanges() EF again complains about not being able to insert records with duplicate PKs, and the exception still mentions 1000 even though supplier contains 1001 in primary_key. I feel this is me not using _ctx properly; do I need to call something else to sync it?
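    For what it's worth, the symptom is consistent with the failed entity still sitting in the context's pending changes, so every later SaveChanges() retries it. A sketch of one way to clear it (ObjectContext API assumed, matching the AddToSupplier call above):

        // Hedged sketch: detach the offending entity on failure so the next
        // SaveChanges() no longer tries to re-insert the duplicate.
        public Supplier Add(Supplier supplier)
        {
            try
            {
                _ctx.AddToSupplier(supplier);
                _ctx.SaveChanges();
                return supplier;
            }
            catch
            {
                _ctx.Detach(supplier); // remove it from the pending change set
                throw;
            }
        }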

    Read the article

  • c++ issues with cin.fail() in my program

    - by Wallace
    I want to use the input y to save and r to resume, but when I write it as in the following code and then input y or r, I just get the notice "Please enter two positive numbers"; the line if (x == (int)('y')) and the next line are ignored. How can this happen?

        int main() {
            cout << "It's player_" << player + 1
                 << "'s turn, please input a row and col; to save and exit, input 0; resume game, input"
                 << endl;
            while (true) {
                cin >> x;
                if (x == (int)('y')) { save(); has_saved = true; break; }
                if (x == (int)('r')) { resume(); has_resumed = true; break; }
                cin >> y;
                if (cin.fail()) {
                    cout << "Please enter two positive numbers" << endl;
                    cin.clear();
                    cin.sync();
                } else if (x > n || x < 1 || y < 1 || y > n) {
                    cout << "you must input a positive number less than or equal to " << n << endl;
                    continue;
                } else if (chessboard[x][y] != ' ') {
                    cout << "Wrong input, please try again!" << endl;
                    continue;
                } else {
                    chessboard[x][y] = player_symbol[player + 1];
                    break;
                }
            }
        }
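    For context, the root cause is that x is an int, so cin >> x fails on the character 'y' before the comparison ever runs, and the stream stays in a failed state. A sketch of the usual fix is to read a token as a string first and only then interpret it (function name is made up):

        // Hedged sketch: read a string token, branch on commands, else parse a number.
        #include <iostream>
        #include <sstream>
        #include <string>

        // returns 1 = save, 2 = resume, 0 = got a number into x, -1 = bad input
        int readCommandOrNumber(int& x) {
            std::string tok;
            if (!(std::cin >> tok)) return -1;
            if (tok == "y") return 1;            // caller runs save()
            if (tok == "r") return 2;            // caller runs resume()
            std::istringstream ss(tok);
            if (!(ss >> x)) return -1;           // neither a command nor a number
            return 0;
        }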

    Read the article

  • Should I invest in GraniteDS for Flex + Java development?

    - by Boden
    I'm new to Flex development, and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service on top of which I'm writing a Flex UI; currently I'm using BlazeDS. This is an internal application running on a local network.

    It's become apparent to me that RIAs work more like desktop applications than web applications, in that we load up the entire model (or at least the portion we're interested in) and work with it directly on the client. This doesn't really jive well with BlazeDS, because it only supports remoting, not data management, so it becomes a lot of extra work to keep clients in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible). So it feels like I'm left treating my Flex application like a regular old web application, doing a lot of fine-grained loading of data.

    LiveCycle is too expensive, and the free version of WebORB for Java really only does remoting. Enter GraniteDS: as far as I can determine, it's the only free solution out there with many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of framework that I'll have to learn just to get an application running. So my questions to the StackOverflow audience are:

        1. Do you recommend GraniteDS, especially given that my current Java stack is Spring + Hibernate?
        2. At what point do you feel it starts to pay off? That is, at what level of application complexity does using GraniteDS really start to make development that much better, and in what ways?

    Read the article

  • Over Optimistic Daily Productivity

    - by Dan Revell
    I'm a junior developer and have been working since I graduated last summer, so coming up on a year now. An issue is starting to get to me: every night I think back to what I did that day, feel bad that I didn't get as much done as I would have liked, and tick off in my head all the things I'll get done the following day. Come the end of the following day, I haven't gotten through half of what I wanted to.

    Might this over-optimism be simply because I'm relatively new to the profession and not yet aware of how long things will actually take me? The work might be quick to think through in my head, but all sorts of time sinks involved can bleed away the hours. If not that, then perhaps it's the technology stack I'm working on: SharePoint isn't the easiest thing to develop for, and it's certainly something I came into not knowing a whole lot about.

    If it's because I'm not yet skilled enough to predict how long things will take me, is this trait of over-optimistic prediction universal to the profession? I'd appreciate any input from those experienced in working with younger developers, and from those who might have suffered from this themselves.

    [EDIT] Perhaps I worded the question badly; I'm interested in just general day-to-day work rather than overall project completion estimation.

    Read the article
