Search Results

Search found 7051 results on 283 pages for 'updates'.

Page 88/283 | < Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >

  • Silverlight 3 data-binding child property doesn't update

    - by sonofpirate
    I have a Silverlight control that has my root ViewModel object as its data source. The ViewModel exposes a list of Cards as well as a SelectedCard property, which is bound to a drop-down list at the top of the view. I then have a form of sorts at the bottom that displays the properties of the SelectedCard. My XAML appears as (reduced for simplicity):

      <StackPanel Orientation="Vertical">
        <ComboBox DisplayMemberPath="Name" ItemsSource="{Binding Path=Cards}" SelectedItem="{Binding Path=SelectedCard, Mode=TwoWay}" />
        <TextBlock Text="{Binding Path=SelectedCard.Name}" />
        <ListBox DisplayMemberPath="Name" ItemsSource="{Binding Path=SelectedCard.PendingTransactions}" />
      </StackPanel>

    I would expect the TextBlock and ListBox to update whenever I select a new item in the ComboBox, but this is not the case. I'm sure it has to do with the fact that the TextBlock and ListBox are actually bound to properties of the SelectedCard, so they are listening for property change notifications from that object. But I would have thought that data-binding would be smart enough to recognize that the parent object in the binding expression had changed and update the entire binding. It bears noting that the PendingTransactions property (bound to the ListBox) is lazy-loaded. The first time I select an item in the ComboBox, I make the async call, load the list, and the UI updates to display the information corresponding to the selected item. However, when I reselect an item, the UI doesn't change! For example, if my original list contains three cards, the first card is selected by default. Data-binding accesses the PendingTransactions property on that Card object and updates the ListBox correctly. If I select the second card in the list, the same thing happens and I get that card's list of PendingTransactions. But if I select the first card again, nothing changes in my UI! Setting a breakpoint, I can confirm that the SelectedCard property is being updated correctly. How can I make this work?
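
    One likely culprit, sketched under the assumption that the ViewModel implements INotifyPropertyChanged: SelectedCard.* bindings only re-evaluate when PropertyChanged is raised for SelectedCard itself, so the setter must raise it on every change. A minimal sketch (the Card type and member names mirror the question; the rest is hypothetical):

      using System.ComponentModel;

      public class CardsViewModel : INotifyPropertyChanged
      {
          public event PropertyChangedEventHandler PropertyChanged;

          private Card selectedCard;
          public Card SelectedCard
          {
              get { return selectedCard; }
              set
              {
                  selectedCard = value;
                  // Raising PropertyChanged for the parent property forces the
                  // SelectedCard.Name and SelectedCard.PendingTransactions
                  // bindings to re-read through the new value.
                  PropertyChangedEventHandler handler = PropertyChanged;
                  if (handler != null)
                      handler(this, new PropertyChangedEventArgs("SelectedCard"));
              }
          }
      }

    If the lazy load repopulates PendingTransactions after the fact, that property should also be an ObservableCollection (or likewise raise a change notification) so the ListBox refreshes when the data arrives.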

    Read the article

  • Counting elements and reading attributes with .NET 2.0?

    - by Prix
    I have an application on .NET 2.0 and I am having some difficulty with it, as I am more used to LINQ. The XML file looks like this:

      <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
      <updates>
        <files>
          <file url="files/filename.ext" checksum="06B9EEA618EEFF53D0E9B97C33C4D3DE3492E086" folder="bin" system="0" size="40448" />
          <file url="files/filename.ext" checksum="CA8078D1FDCBD589D3769D293014154B8854D6A9" folder="" system="0" size="216" />
          <file url="files/filename.ext" checksum="CA8078D1FDCBD589D3769D293014154B8854D6A9" folder="" system="0" size="216" />
        </files>
      </updates>

    The file is downloaded and read on the fly:

      XmlDocument readXML = new XmlDocument();
      readXML.LoadXml(xmlData);

    Initially I was thinking it would go with something like this:

      XmlElement root = doc.DocumentElement;
      XmlNodeList nodes = root.SelectNodes("//files");
      foreach (XmlNode node in nodes)
      {
          // ... read it ...
      }

    But before reading them I need to know how many there are, to use for my progress bar, and I am also clueless on how to grab the attributes of the file element in this case. How could I count how many "file" elements I have (counting them before entering the foreach, of course) and read their attributes? I need the count because it will be used to update the progress bar. Overall it is not reading my XML very well.
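
    A sketch of one way to do this with the .NET 2.0 XML API (assuming the document is already loaded into readXML as above): select the file elements themselves rather than the files container, take Count from the node list, and read each attribute by name.

      using System.Xml;

      static int ReadFiles(XmlDocument readXML)
      {
          // Select the <file> elements, not the <files> container.
          XmlNodeList fileNodes = readXML.SelectNodes("/updates/files/file");
          int total = fileNodes.Count;   // use this as the progress bar maximum

          foreach (XmlNode node in fileNodes)
          {
              string url = node.Attributes["url"].Value;
              string checksum = node.Attributes["checksum"].Value;
              string folder = node.Attributes["folder"].Value;
              long size = long.Parse(node.Attributes["size"].Value);
              // ... download the file here, then advance the progress bar by one ...
          }
          return total;
      }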

    Read the article

  • AJAX Autosave

    - by antony.trupe
    What's the best JavaScript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid. Think Gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function. Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework a level higher: one that interacts with the form itself. daemach introduced an implementation on top of jQuery at http://ideamill.synaptrixgroup.com/?p=3. I'm not convinced it meets the lightweight and well-engineered criteria, though. Criteria:

      - stable, lightweight, well engineered
      - saves onChange and/or onBlur
      - saves no more frequently than a given number of milliseconds
      - handles multiple updates happening at the same time
      - doesn't save if no change has occurred since the last save
      - saves to different URLs per input class

    Updates: I've stabilized a solution. See my answer below for links.

    Read the article

  • {DCC Warning} W1036 Variable '$frame' might not have been initialized?

    - by Gad D Lord
    Any ideas why I get this warning in Delphi XE:

      [DCC Warning] Form1.pas(250): W1036 Variable '$frame' might not have been initialized

      procedure TForm1.Action1Execute(Sender: TObject);
      var
        Thread: TThread;
      begin
        ...
        Thread := TThread.CreateAnonymousThread(
          procedure{Anonymous}()
            procedure ShowLoading(const Show: Boolean);
            begin /// <------------- WARNING IS GIVEN FOR THIS LINE (line 250)
              Thread.Synchronize(Thread,
                procedure{Anonymous}()
                begin
                  ...
                  Button1.Enabled := not Show;
                  ...
                end
              );
            end;
          var
            i: Integer;
          begin
            ShowLoading(True);
            try
              Thread.Synchronize(Thread,
                procedure{Anonymous}()
                begin
                  ... // some UI updates
                end
              );
              Thread.Synchronize(Thread,
                procedure{Anonymous}()
                begin
                  ... // some UI updates
                end
              );
            finally
              ShowLoading(False);
            end;
          end
        ).NameThread('Some Thread Name');
        Thread.Start;
      end;

    I do not have a variable named frame or $frame anywhere in my code. I am not even sure how $frame, with the $ sign, can be a valid identifier. Smells like compiler magic to me. PS: Of course the real-life code uses names other than Form1, Button1 and Action1.

    Read the article

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a bit, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word deprecated floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

      - Users MUST be able to select multiple files for uploading in the dialog.
      - I MUST be able to receive status updates on the transmission of a file (progress bars).
      - I would like to be able to use PUT requests instead of POST.
      - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
      - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has these features that I am missing in my Google searches. An example of uploading code using Gears:

      // select some files:
      var desktop = google.gears.factory.create('beta.desktop');
      desktop.openFiles(selectFilesCallback);

      function selectFilesCallback(files) {
        $.each(files, function(k, file) {
          // this code actually goes through a queue, and creates some status bars
          // but it is unimportant to show here...
          sendFile(file);
        });
      }

      function sendFile(file) {
        var request = google.gears.factory.create('beta.httprequest');
        request.open('PUT', upl.url);
        request.setRequestHeader('filename', file.name);
        request.upload.onprogress = function(e) {
          // gives me % status updates... allows e.loaded/e.total
        };
        request.onreadystatechange = function() {
          if (request.readyState == 4) {
            // completed the upload!
          }
        };
        request.send(file.blob);
        return request;
      }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed that item to a "would like" instead of a "must".

    Read the article

  • BeginInvoke on ObservableCollection not immediate.

    - by Padu Merloti
    In my code I subscribe to an event that happens on a different thread. Every time this event fires, I receive a string that is posted to the observable collection:

      Dispatcher currentDispatcher = Dispatcher.CurrentDispatcher;
      ObservableCollection<string> SerialLog = new ObservableCollection<string>();

      private void hitStation_RawCommandSent(object sender, StringEventArgs e)
      {
          string command = e.Value.Replace("\r\n", "");
          Action dispatchAction = () => SerialLog.Add(command);
          currentDispatcher.BeginInvoke(dispatchAction, DispatcherPriority.Render);
      }

    The code below is in my view model (it could be in the code-behind; it doesn't matter in this case). When I call hitStation.PrepareHit, the event above gets called a couple of times; then I wait and call hitStation.HitBall, and the event above gets called a couple more times.

      private void HitBall()
      {
          try
          {
              try
              {
                  Mouse.OverrideCursor = Cursors.Wait;
                  // prepare hit
                  hitStation.PrepareHit(hitSpeed);
                  Thread.Sleep(1000);
                  PlayWarning();
                  // hit
                  hitStation.HitBall(hitSpeed);
              }
              catch (TimeoutException ex)
              {
                  MessageBox.Show("Timeout hitting ball: " + ex.Message);
              }
          }
          finally
          {
              Mouse.OverrideCursor = null;
          }
      }

    The problem I'm having is that the ListBox bound to my SerialLog gets updated only when the HitBall method finishes. I was expecting to see a bunch of updates from the PrepareHit, a pause, and then a bunch more updates from the HitBall. I've tried a couple of DispatcherPriority arguments, but they don't seem to have any effect.
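
    A hedged guess at the cause: if HitBall() itself runs on the UI thread, the dispatcher cannot drain the queued SerialLog updates until HitBall returns, so they all appear at once. A minimal sketch of moving the blocking work off the UI thread (hitStation, hitSpeed and PlayWarning are the question's own members; marshalling the cursor changes back to the UI thread is left out for brevity):

      private void HitBallInBackground()
      {
          System.Threading.ThreadPool.QueueUserWorkItem(delegate
          {
              // Runs on a thread-pool thread, so the dispatcher stays free to
              // process the BeginInvoke queue and the ListBox updates live.
              hitStation.PrepareHit(hitSpeed);
              System.Threading.Thread.Sleep(1000);
              PlayWarning();
              hitStation.HitBall(hitSpeed);
          });
      }

    The try/catch for TimeoutException would move inside the delegate, and any MessageBox or cursor work would itself need to be dispatched back to the UI thread.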

    Read the article

  • Windows FTP batch script to read and download from an external user list

    - by Will Sims
    I have several old, unused batch files that I'm redoing. I have a batch file from an old network architecture from several years ago; the main thing I'd like it to do now is read a list of files. I'll explain the setup. The server updates a complete list [CurrentMediaStores.txt] twice a day. The laptops can be set to download this list through their start.bat, which also runs the add-ins and updates I apply to my PCs, to free my batches (and myself) from slavish folder assignments and allow a little more flexibility with less administration. The bats now call on a list the user makes by simply copying a line from the CurrentMediaStores.txt file and pasting it into their [Grab_List.txt]. My problem: I have the branch commented out (::) right now, along with the code that detects whether the LAN is connected and switches to an FTP connection if not. I'd like the FTP batch to call on and use the Grab_List as well, but I just don't know how to run a for loop over x number of files from the user's request list within an FTP session. Any help would be greatly appreciated.

    Read the article

  • LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it, and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

      UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL the data is read in the beginning, moved into the class, changed and then saved:

      Stats.RegisteredUsers++;
      db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ will go "oops, this value is already 100,001. Whatever, I'll throw an exception." You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ to SQL?
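
    One common way out, sketched here with hypothetical table and column names: keep the atomic server-side UPDATE, but issue it through the same DataContext with ExecuteCommand, so the increment never depends on a possibly stale in-memory value.

      using System.Data.Linq;

      public static void IncrementRegisteredUsers(DataContext db, int statsId)
      {
          // The database computes value + 1 under its own locking, so two
          // concurrent requests both land: 100,000 -> 100,001 -> 100,002.
          db.ExecuteCommand(
              "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
              statsId);
      }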

    Read the article

  • LINQ to SQL: Insert instead of Update

    - by Christina Mayers
    I have been stuck with this problem for a long time now. All I'm trying to do is insert a row in my DB if it's new information, and if not, update the existing one. I've updated many entities in my life before, but what's wrong with this code is beyond me (probably something pretty basic). I guess I can't see the wood for the trees...

      private Models.databaseDataContext db = new Models.databaseDataContext();

      internal void StoreInformations(IEnumerable<EntityType> iEnumerable)
      {
          foreach (EntityType item in iEnumerable)
          {
              EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault();
              if (type == null)
              {
                  db.EntityType.InsertOnSubmit(item);
              }
              else
              {
                  type.Date = item.Date;
                  type.LastUpdate = DateTime.Now;
                  type.End = item.End;
              }
          }
      }

      internal void Save()
      {
          db.SubmitChanges();
      }

    Edit: I just checked the ChangeSet; there are no updates, only inserts. For now I've settled with:

      foreach (EntityType item in iEnumerable)
      {
          EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault();
          if (type != null)
          {
              db.EntityType.DeleteOnSubmit(type);
          }
          db.EntityType.InsertOnSubmit(item);
      }

    but I'd love to do updates and lose these unnecessary delete statements.
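
    Two things worth checking, offered as hedged diagnostics rather than a confirmed fix. First, LINQ to SQL only emits UPDATE statements for entities whose table has a primary key mapped in the DBML; without one, property changes are not tracked and the ChangeSet stays empty of updates, which matches the symptom above. Second, the change set can be inspected before SubmitChanges:

      ChangeSet changes = db.GetChangeSet();   // System.Data.Linq API
      Console.WriteLine("Inserts: {0}, Updates: {1}, Deletes: {2}",
          changes.Inserts.Count, changes.Updates.Count, changes.Deletes.Count);

    If Updates stays at 0 after modifying a fetched entity, the mapping (most often a missing primary key) is the place to look.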

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      - Security
      - Concurrent access
      - Performance with large amounts of data
      - Amount of time to do such a massive rewrite/switch
      - Lack of transactions
      - PITA to map relational data to flat files

    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
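
    For the demo, a minimal timing sketch along the lines described (the connection string, table name and share path are placeholders, and only the insert case is shown; run it from two machines at once to also demonstrate the concurrency problem):

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;
      using System.IO;

      class FlatFileVsSql
      {
          static void Main()
          {
              const int n = 100000;

              Stopwatch sw = Stopwatch.StartNew();
              using (SqlConnection conn = new SqlConnection("Server=...;Database=Demo;Integrated Security=SSPI"))
              {
                  conn.Open();
                  using (SqlCommand cmd = new SqlCommand("INSERT INTO Records(Payload) VALUES (@p)", conn))
                  {
                      cmd.Parameters.Add("@p", System.Data.SqlDbType.NVarChar, 100);
                      for (int i = 0; i < n; i++)
                      {
                          cmd.Parameters["@p"].Value = "row " + i;
                          cmd.ExecuteNonQuery();
                      }
                  }
              }
              Console.WriteLine("SQL inserts:       {0}", sw.Elapsed);

              sw = Stopwatch.StartNew();
              for (int i = 0; i < n; i++)
              {
                  // Each append reopens the file, as independent writers would have to.
                  File.AppendAllText(@"\\server\share\records.txt", "row " + i + Environment.NewLine);
              }
              Console.WriteLine("Flat file appends: {0}", sw.Elapsed);
          }
      }

    Searching makes the starkest demo of all: a WHERE clause on an indexed column versus reading the whole file line by line.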

    Read the article

  • Looking for a very simple file-based CMS

    - by nfm
    I'm building a site for a friend for free, and am trying to work out a good way for her to be able to easily make updates. I haven't used any CMSs before. I was browsing the web today looking at some, and they all seem way too complicated for what I'm after. Basically, all I want is a really simple CMS that pulls together HTML snippets in particular subdirectories, wraps them in header/footer HTML and inserts them into a template page in the appropriate section. I'm imagining a site layout something like this:

      /
      /index.php
      /blog_template.php
      /news_template.php
      /blog/
      /blog/header.php
      /blog/footer.php
      /blog/my-first-blog.html
      /blog/blogs-rule.html
      /blog/...

    Say index.php contains div#blog. PHP would wrap each /blog/*.html file in /blog/header.php and /blog/footer.php, and insert them into the div#blog as div#blog([0-9]*). I haven't been able to find anything this basic, and am one step away from throwing something together myself, but I'm a bit short on time at the moment and figured I'd post here first. Has anyone come across something like this? I don't want any DB, extensions, user accounts, installation, config, updates... just a simple file-based solution. Thanks :) Forgot to mention - it needs to be FOSS and run on Linux!

    Read the article

  • What is the right pattern for an async data-fetching method with .NET async/await

    - by s093294
    Given a class with a method GetData: a few other clients call GetData, and instead of fetching the data each time, I would like to create a pattern where the first call starts the task that gets the data, and the rest of the calls wait for that task to complete.

      private Task<string> _data;

      private async Task<string> _getdata()
      {
          return "my random data from the net"; // get_data_from_net()
      }

      public string GetData()
      {
          if (_data == null)
              _data = _getdata();
          _data.Wait(); // is there not a problem here? can't Wait() a task that is already completed?
          if (_data.Status != TaskStatus.RanToCompletion)
              _data.Wait(); // this is not any better; it might complete between the check and the Wait()
          return _data.Result;
      }

    How would I do the pattern correctly? (Solution)

      private static object _servertime_lock = new object();
      private static Task<string> _servertime;

      private static async Task<string> servertime()
      {
          try
          {
              var thetvdb = new HttpClient();
              thetvdb.Timeout = TimeSpan.FromSeconds(5);
              // var st = await thetvdb.GetStreamAsync("http://www.thetvdb.com/api/Updates.php?type=none");
              var response = await thetvdb.GetAsync("http://www.thetvdb.com/api/Updates.php?type=none");
              response.EnsureSuccessStatusCode();
              Stream stream = await response.Content.ReadAsStreamAsync();
              XDocument xdoc = XDocument.Load(stream);
              return xdoc.Descendants("Time").First().Value;
          }
          catch
          {
              return null;
          }
      }

      public static async Task<string> GetServerTime()
      {
          lock (_servertime_lock)
          {
              if (_servertime == null)
                  _servertime = servertime();
          }
          var time = await _servertime;
          if (time == null)
              _servertime = null;
          return time;
      }
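
    Another common shape for this, sketched as an alternative rather than the canonical answer: wrap the task in Lazy<Task<T>>, whose default thread-safety mode guarantees the fetch is started exactly once even under concurrent first calls, while every caller awaits the same shared task.

      using System;
      using System.Threading.Tasks;

      public class DataCache
      {
          private readonly Lazy<Task<string>> _data =
              new Lazy<Task<string>>(() => FetchAsync());

          private static async Task<string> FetchAsync()
          {
              await Task.Delay(100);                 // stand-in for the real network call
              return "my random data from the net";
          }

          public Task<string> GetDataAsync()
          {
              // The first caller triggers FetchAsync(); later callers receive the
              // same Task. Awaiting an already-completed task returns immediately.
              return _data.Value;
          }
      }

    Note this caches failures too; the null-reset trick in the question's solution is one way to allow retries.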

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and DBML in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio's Server Explorer), and none of my updates were there.

      using (var context = new CenasDataContext())
      {
          context.Log = Console.Out;
          context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
          context.SubmitChanges();
      }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. This is the log from the execution (I printed the context log to the console):

      INSERT INTO [dbo].Cenas VALUES (@p0)
      -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
      -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).
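
    A quick diagnostic worth trying (hedged, since the question doesn't show the connection string): print the connection string the context actually uses. With |DataDirectory|-style .mdf files, Visual Studio typically copies the database into bin\Debug on every build, so the running app writes to that copy while Server Explorer queries the pristine copy in the project folder, and the inserts appear to vanish.

      using (var context = new CenasDataContext())
      {
          // Shows which physical .mdf the application is really attached to.
          Console.WriteLine(context.Connection.ConnectionString);
      }

    If it points into bin\Debug, either query that file instead, or change the .mdf's "Copy to Output Directory" property ("Copy if newer" or "Do not copy") as appropriate.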

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions against a flat file (stored on the network) and SQL over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      - Security
      - Concurrent access
      - Performance with large amounts of data
      - Amount of time to do such a massive rewrite/switch
      - Lack of transactions
      - PITA to map relational data to flat files
      - NTFS doesn't handle huge numbers of files in a directory well

    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.

    Read the article

  • How can I share an Entity Framework model across website users

    - by richardmoss
    Hello. Currently my website is based on MVC and the Entity Framework running against a SQL Server 2005 database. So far it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)). Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post. At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried over to the other. As a test, I tried storing a single copy of the model, which all visitors would access, by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time while the model was being initialized, one browser would crash and the other would work fine, despite my using a locking object, so in theory one of them should have been delayed until the model was ready (of course, I could have implemented this wrong ;)). Also, this site originally did use one model for all visitors, and when it was live it frequently shut down, killing the IIS application pool as it did so. Now I'm not sure if this was related, but I don't really want to reintroduce whatever bug caused that shutdown. So, my question is a simple one really: what is the best way of either using the same model for all website users so they all see updates, or, if they do have separate copies (which I imagine will have a performance impact in time), how can the models detect changes in the database and update themselves accordingly? Thanks in advance for any advice! Regards; Richard Moss
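
    A third option, sketched here as a suggestion rather than the established answer: give each HTTP request (not each session, and not the whole site) its own short-lived context. Contexts are cheap to construct, are never shared between threads, and every request reads committed data, so both browsers see each other's posts. MyEntities stands in for the generated ObjectContext type:

      using System.Web;

      public static class ContextPerRequest
      {
          private const string Key = "MyEntitiesContext";

          public static MyEntities Current
          {
              get
              {
                  var ctx = (MyEntities)HttpContext.Current.Items[Key];
                  if (ctx == null)
                  {
                      ctx = new MyEntities();
                      HttpContext.Current.Items[Key] = ctx;   // scoped to this request only
                  }
                  return ctx;
              }
          }

          // Wire this up in Global.asax's Application_EndRequest.
          public static void DisposeCurrent()
          {
              var ctx = (MyEntities)HttpContext.Current.Items[Key];
              if (ctx != null) ctx.Dispose();
          }
      }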

    Read the article

  • jQuery carousel clicks update <select> list's "selected" option to match clicked item's title attribute

    - by Scott B
    The code below allows me to change a preview image outside the carousel widget so that it matches the element under the mouse. For example, if the user mouses over image2's thumbnail, the script updates .selectedImage so that it displays image2's full-size version. I'd like to enhance it so that the #myThumbs options listing updates its "selected" option to match the carousel image that receives a click. Mouseover changes the preview image, and click changes the select list to match the same name. The items in the carousel will have the same title attribute as the items in the select list, so I would expect that I can pass that value from the carousel to the select list.

      $(function() {
        $("#carousel").jCarouselLite({
          btnNext: ".next",
          btnPrev: ".prev",
          visible: 6,
          mouseWheel: true,
          speed: 700
        });
        $('#carousel').show();
        $('#carousel ul li').hover(function(e) {
          var img_src = $(this).children('img').attr('src');
          $('.selectedImage img').attr('src', img_src);
        }, function() {
          $('.selectedImage img').attr('src', '<?php echo $selectedThumb; ?>');
        });
      });

      <select id="myThumbs">
        <option>image1</option>
        <option selected="selected">image2</option>
        <option>image3</option>
      </select>

    Read the article

  • Get the last checked checkboxes...

    - by Sara
    Hi everyone. I'm not sure how to accomplish this, and it has been confusing me for a few days. I have a form that updates a user record in MySQL when a checkbox is checked. This is how my form does it:

      if (isset($_POST['Update'])) {
          $paymentr = $_POST['paymentr']; // put checkboxes array into variable
          $paymentr2 = implode(', ', $paymentr); // implode array for mysql
          $query = "UPDATE transactions SET paymentreceived=NULL";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentdate='0000-00-00'";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentreceived='Yes' WHERE id IN ($paymentr2)";
          $result = mysql_query($query);
          $query = "UPDATE transactions SET paymentdate=NOW() WHERE id IN ($paymentr2)";
          $result = mysql_query($query);
          foreach ($paymentr as $v) { // should collect the last updated records and put them into a variable for emailing
              $query = "SELECT id, refid, affid FROM transactions WHERE id = '$v'";
              $result = mysql_query($query) or die("Query Failed: ".mysql_errno()." - ".mysql_error()."<BR>\n$query<BR>\n");
              $trans = mysql_fetch_array($result, MYSQL_ASSOC);
              $transactions .= '<br>User ID:'.$trans['id'].' -- '.$trans['refid'].' -- '.$trans['affid'].'<br>';
          }
      }

    Unfortunately, it then updates ALL the user records with the latest date, which is not what I want it to do. The alternative I thought of was, via JavaScript, giving the checkbox a value that would be dynamically updated when the user selected it. Then only THOSE checkboxes would be put into the array. Is this possible? Is there a better solution? I'm not even sure I could wrap my brain around how to do that WITH JavaScript. Does the answer perhaps lie in how my MySQL code is written? Thanks - I sincerely appreciate it!

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:

      Tasks_Table
      - id
      - task_complete
      - task_active
      - column_1
      - ...
      - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows. To enable scaling of the system, I want to enable deployment of the service on more machines. Because I want to prevent a task being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time - I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning the result set. So my question is: how do I modify the SELECT result set in the stored procedure before returning it?

    Read the article

  • MSBuild: automate collecting of db migration scripts?

    - by P Dub
    Summary of environment:

      - ASP.NET web application (source stored in SVN)
      - SQL Server database (schema - tables/sprocs - stored in SVN)
      - DB version is synced with the web application assembly version (stored in a table 'CurrentVersion')
      - CI Hudson server that checks out the web app from the repo and runs a custom MSBuild file to publish/package the app

    My MSBuild script updates the assembly version of the web app (Major.Minor.Revision.Build) on each build. The 'Revision' is set to the currently checked-out SVN revision and the 'Build' to the Hudson build number (incremented on each automated build). This way I can match the app to a specific trunk revision and also get other build stats from the Hudson build number. I'd like to automate the collecting of migration scripts (updated sprocs etc.) to add to the zip package. I guess that by comparing the SVN revision of the database that has yet to be deployed to with the revision being deployed, I can find which DB files have changed in the trunk since the last deployment to that database/environment. This could easily be achieved by manually calling svn diff -r REVNO:REVNO to list the changed .sql files, which would then have to be added to the package by hand. It would be great if this could be automated. Firstly, I imagine I'd have to write a custom task to check the version of the database that has yet to be deployed to. After that I'm quite unsure. Does anyone have any suggestions on how this could be achieved through an MSBuild task, either existing or custom? Finally, I'll have to auto-generate a script to add to the package that updates the database version table so as to be in sync with the application.
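
    A sketch of the custom-task idea (hedged: the task name, property names and output handling are all assumptions, and svn is assumed to be on the build agent's PATH): shell out to svn diff --summarize between the deployed revision and the build revision, and surface the changed .sql files as an MSBuild item list that a later Copy/Zip target can consume.

      using System.Collections.Generic;
      using System.Diagnostics;
      using Microsoft.Build.Framework;
      using Microsoft.Build.Utilities;

      public class SvnChangedSqlFiles : Task
      {
          [Required] public string RepoUrl { get; set; }
          [Required] public string FromRevision { get; set; }
          [Required] public string ToRevision { get; set; }

          [Output] public ITaskItem[] ChangedScripts { get; set; }

          public override bool Execute()
          {
              // "svn diff --summarize" prints one status line per changed file,
              // e.g. "M       db/sprocs/GetTasks.sql".
              ProcessStartInfo psi = new ProcessStartInfo("svn",
                  string.Format("diff --summarize -r {0}:{1} {2}", FromRevision, ToRevision, RepoUrl));
              psi.RedirectStandardOutput = true;
              psi.UseShellExecute = false;

              List<ITaskItem> items = new List<ITaskItem>();
              using (Process p = Process.Start(psi))
              {
                  string line;
                  while ((line = p.StandardOutput.ReadLine()) != null)
                  {
                      string path = line.Substring(2).Trim();   // drop the status columns
                      if (path.EndsWith(".sql"))
                          items.Add(new TaskItem(path));
                  }
                  p.WaitForExit();
              }
              ChangedScripts = items.ToArray();
              return true;
          }
      }

    The "revision yet to be deployed" would come from querying the target database's CurrentVersion table in an earlier step and feeding it in as FromRevision.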

    Read the article

  • How to update a Widget dynamically (Not waiting 30 min for onUpdate to be called)?

    - by Donal Rafferty
    I am currently learning about widgets in Android. I want to create a Wi-Fi widget that will display the SSID and the RSSI (signal) level. But I also want to be able to send it data from a service I am running that calculates the quality of sound over Wi-Fi. Here is what I have after some reading and a quick tutorial:

      public class WlanWidget extends AppWidgetProvider {
          RemoteViews remoteViews;
          AppWidgetManager appWidgetManager;
          ComponentName thisWidget;
          WifiManager wifiManager;

          public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
              Timer timer = new Timer();
              timer.scheduleAtFixedRate(new WlanTimer(context, appWidgetManager), 1, 10000);
          }

          private class WlanTimer extends TimerTask {
              RemoteViews remoteViews;
              AppWidgetManager appWidgetManager;
              ComponentName thisWidget;

              public WlanTimer(Context context, AppWidgetManager appWidgetManager) {
                  this.appWidgetManager = appWidgetManager;
                  remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget);
                  thisWidget = new ComponentName(context, WlanWidget.class);
                  wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
              }

              @Override
              public void run() {
                  remoteViews.setTextViewText(R.id.widget_textview, wifiManager.getConnectionInfo().getSSID());
                  appWidgetManager.updateAppWidget(thisWidget, remoteViews);
              }
          }
      }

    The above seems to work OK; it updates the SSID on the widget every 10 seconds. However, what is the most efficient way to get the information from my already-running service so it updates periodically on my widget? Also, is there a better approach than a Timer and TimerTask for updating the widget (i.e. avoiding polling)? UPDATE: As per Karan's suggestion, I have added the following code in my service:

      RemoteViews remoteViews = new RemoteViews(context.getPackageName(), R.layout.widget);
      ComponentName thisWidget = new ComponentName(context, WlanWidget.class);
      remoteViews.setTextViewText(R.id.widget_QCLevel, " " + qcPercentage);
      AppWidgetManager.getInstance(context).updateAppWidget(thisWidget, remoteViews);

    This gets run every time the RSSI level changes, but it still never updates the TextView on my widget. Any ideas why?

    Read the article

  • Drupal view filter to show only one of a certain item

    - by Joel
    I'm fairly new to Drupal, and am using Node Import to take a TSV file and turn it into nodes. I'm hitting a problem, though, with automating updates to the nodes. Again, I'd like to take a tab-separated-values text file, load it into my site via Node Import (or whatever else anyone might suggest) and then only show the updated nodes. Here's a specific example. I have nodes with the following info:

      StoreId  Name   Address   Phone   Contact
      01       Name1  Address1  Phone1  Contact1
      02       Name2  Address2  Phone2  Contact2
      ...

    The info pulls into the nodes just fine (thank you, Node Import!), but we also want to process updates to the nodes. So far I have two ideas: figure out how to delete duplicate (previous) instances of the same StoreId, or just save the node with the duplicate StoreId (and the new other info) and display only the most current version. In Views, I can get it to show the nodes and everything, but I can't figure out how to display only the most recent version of each StoreId. A view of views would work, but I can't seem to get that to work either. Any ideas or other approaches I could take? Thanks in advance for the help!

    Read the article

  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from apt's cache folder. There are many applications and Ubuntu updates among them, but there's no dependency failure, because they were all downloaded automatically by 'add/remove applications' and 'update manager'. Now I have installed the same version of Ubuntu (9.04) again, and I want to install those apps and updates again (though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything, using only those .deb files that I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to work out the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work - I think because all of my .debs are in one folder and there are no separate 'main', 'restricted', ... folders?! Is there a way to do all of this automatically? Thanks.

    Read the article

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts/updates/selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade). This deployment is on Windows Server 2008 x64, IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work - inserts into some small tables - but most never even make it to the DB. Using SQL Profiler, the inserts/updates never make it to the server, and with log4net turned up to DEBUG and show_sql true, the SELECT statements appear in the log but the INSERT/UPDATE statements never show up at all, and never reach the server. What's even more odd is that the application seems to be oblivious to this: the command-and-close runs without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but nothing gets persisted. Certainly an upgrade is due, but I would hate to attempt it mid-deployment, without time to properly test the app. Any ideas?
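
    One thing worth ruling out first (a hedged sketch, since the question doesn't show the persistence code): NHibernate queues inserts and updates until the session is flushed, and with FlushMode.Commit (or a session that is never explicitly flushed) the pending statements can be silently discarded when the session closes outside a transaction. Wrapping every write in an explicit transaction makes the flush deterministic:

      using NHibernate;

      // session is the ISession the open-session-in-view HttpModule provides.
      public void Save(ISession session, object entity)
      {
          using (ITransaction tx = session.BeginTransaction())
          {
              session.SaveOrUpdate(entity);
              tx.Commit();   // flushes the session; without a commit the queued
                             // INSERT/UPDATE may never be sent to SQL Server
          }
      }

    That would also be consistent with the generated IDs appearing: client-side generators like guid produce the identifier in memory without ever touching the database.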

    Read the article

  • Scheduling a Delayed Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a Worker Dyno. I want to continue to run this script once-a-day, just offload it to the worker. My current rake task is: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.process puts "... done." end Once the gem is installed, migration run, and worker dyno started, will the script just need to change to: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.delay.process puts "... done." end With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the Worker's queue? Thanks in advance for any help.

    Read the article

  • Battery drains even with app off screen, could it be Location Services doing it?

    - by John Jorsett
    I run my app, which uses GPS and Bluetooth, then hit the back button so it goes off screen. I verified via LogCat that the app's onDestroy was called. OnDestroy removes the location listeners and shuts down my app's Bluetooth service. I look at the phone 8 hours later and half the battery charge has been consumed, and my app was responsible according the phone's Battery Use screen. If I use the phone's Settings menu to Force Stop the app, this doesn't occur. So my question is: do I need to do something more than remove the listeners to stop Location Services from consuming power? That's the only thing I can think of that would be draining the battery to that degree when the app is supposedly dormant. Here's my onStart() where I turn on the location-related stuff and Bluetooth: @Override public void onStart() { super.onStart(); if(D_GEN) Log.d(TAG, "MainActivity onStart, adding location listeners"); // If BT is not on, request that it be enabled. // setupBluetooth() will then be called during onActivityResult if (!mBluetoothAdapter.isEnabled()) { Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE); startActivityForResult(enableIntent, REQUEST_ENABLE_BT); // Otherwise, setup the Bluetooth session } else { if (mBluetoothService == null) setupBluetooth(); } // Define listeners that respond to location updates mLocationManager = (LocationManager) this.getSystemService(Context.LOCATION_SERVICE); mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, GPS_UPDATE_INTERVAL, 0, this); mLocationManager.addGpsStatusListener(this); mLocationManager.addNmeaListener(this); } And here's my onDestroy() where I remove them: public void onDestroy() { super.onDestroy(); if(D_GEN) Log.d(TAG, "MainActivity onDestroy, removing update listeners"); // Remove the location updates if(mLocationManager != null) { mLocationManager.removeUpdates(this); mLocationManager.removeGpsStatusListener(this); mLocationManager.removeNmeaListener(this); } if(D_GEN) Log.d(TAG, "MainActivity onDestroy, finished removing update listeners"); if(D_GEN) Log.d(TAG, "MainActivity onDestroy, stopping Bluetooth"); stopBluetooth(); if(D_GEN) Log.d(TAG, "MainActivity onDestroy finished"); }

    Read the article
