Search Results

Search found 743 results on 30 pages for 'fetching'.

  • Fetching real-time data from Excel

    - by Umesh Sharma
    This is my first time asking for help here, so please bear with me. I am developing a VB.NET app in which I read real-time data from an Excel sheet using Microsoft.Office.Interop.Excel, i.e. Excel automation. The cells in the sheet fetch stock data from a local DDE server via formulas like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on; some cells hold fixed values like "Symbol", "Bid Rate" or "32.90". The values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) change continuously. The problem: the fixed cell values come through to my app just fine, but every DDE-mapped cell returns "-2146826246" (when the local DDE data source is on) or "-2146826265" (when it is off). If I use C#, everything works; the failure only happens with VB.NET. I want to display the Excel range A1 to J50 in a VB.NET ListView, refreshing every 200 ms (5 times per second). The important question: is it possible to bind ListView items/column values to Excel cells or to local in-memory variables? Currently I read the sheet cell by cell and push the values into the .NET ListView, but CPU usage is very high and the process is far too slow. If binding is possible, how is it done? I am a VFP developer who is new to .NET; this is very easy in VFP, so why not in .NET? Please guide me if you have a solution.
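
    One thing worth noting: cell-by-cell COM reads are usually the bottleneck in this scenario, because each read is a separate cross-process call, and reading the whole range in one call is dramatically cheaper. A minimal sketch of that bulk-read idea, in Python with pywin32 rather than VB.NET (the workbook name is hypothetical, and Excel is assumed to be already running):

        # Attach to the running Excel instance and read A1:J50 in ONE COM call,
        # instead of 500 separate cell reads. Workbook name is hypothetical.
        import win32com.client

        xl = win32com.client.GetObject(Class="Excel.Application")
        ws = xl.Workbooks("quotes.xlsx").Worksheets(1)

        # One round-trip returns the whole range as a tuple of row tuples.
        values = ws.Range("A1:J50").Value
        for row in values:
            print(row)

    The same trick works from VB.NET interop: assign Range("A1:J50").Value2 to an Object(,) array once per refresh, then fill the ListView from that array.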

    Read the article

  • ActionScript/Flex: fetching assets from a CDN

    - by dta
    I have a peculiar problem. My main.swf loads symbols from other swfs at runtime. If I keep all the swfs (main and others) on my own server, things work fine. But if I keep all of them on a CDN, one particular symbol occasionally won't display. With the Flash Tracer plugin for Firefox I can see that all the symbols have been loaded and that their z-indices are as they should be. What could the issue possibly be?

    Read the article

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources on WP7. My current approach is: load the newspaper information from an XML file; go to the specified sections and fetch the URLs of the news items; go to each URL and fetch the headline, image and publisher; display everything using the MVVM architecture of Windows Phone. The whole thing takes place asynchronously: as soon as a URL from a section of a newspaper is fetched, it is added to a queue and the second stage, fetching the headline, image, etc., starts; as soon as even one article is fetched it is displayed, and later articles are added to the list as they arrive. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that fetching even around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because the images are often not available with the headlines and are only found in the article.
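
    For reference, the shape of such a two-stage pipeline with bounded concurrency, sketched in Python with concurrent.futures rather than SmartThreadPool (the section URLs are placeholders and the two parsers are stubs):

        import concurrent.futures as cf
        import urllib.request

        # Placeholder section URLs; the real app loads these from its XML file.
        SECTIONS = ["https://example.com/sports", "https://example.com/world"]

        def fetch(url):
            # One bounded pool handles all downloads, keeping concurrency in check.
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()

        def article_urls(section_html):
            return []  # stub for the real link-extraction scraper

        def headline(article_html):
            return article_html[:60]  # stub for the real headline/image scraper

        # Stage-2 jobs are submitted the moment their stage-1 result arrives,
        # so the first articles can be displayed before all sections finish.
        with cf.ThreadPoolExecutor(max_workers=8) as pool:
            stage1 = [pool.submit(fetch, url) for url in SECTIONS]
            stage2 = []
            for done in cf.as_completed(stage1):
                for url in article_urls(done.result()):
                    stage2.append(pool.submit(fetch, url))
            for done in cf.as_completed(stage2):
                print(headline(done.result()))

    If a pipeline like this is still slow, the usual suspects are per-request DNS/TLS setup (fixed by reusing connections) and a badly tuned worker count, rather than the pool implementation itself.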

    Read the article

  • Fetching database query through function

    - by Shubham Maurya
    I am sick of connecting to the database in each script; I need a more OOP approach to fetching database results, like the way WordPress uses its wpdb class to fetch results. This is what WordPress does to get data:

        <?php $posts = $wpdb->get_results("SELECT ID, post_title FROM $wpdb->posts WHERE post_status = 'publish' AND post_type='post' ORDER BY comment_count DESC LIMIT 0,4") ?>

    How can I create the same kind of feature, using a class or function, and use it in my own scripts? Thank you
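
    A minimal sketch of such a wrapper, written here in Python with sqlite3 to show the shape (a PHP version would wrap PDO the same way; the class and table names are invented):

        import sqlite3

        class DB:
            """Tiny wpdb-style helper: one shared connection, rows as dicts."""
            def __init__(self, path):
                self.conn = sqlite3.connect(path)
                self.conn.row_factory = sqlite3.Row

            def get_results(self, sql, params=()):
                return [dict(r) for r in self.conn.execute(sql, params)]

        # Invented usage mirroring the wpdb call above:
        db = DB(":memory:")
        db.conn.execute("CREATE TABLE posts (ID INTEGER, post_title TEXT, comment_count INTEGER)")
        db.conn.execute("INSERT INTO posts VALUES (1, 'Hello world', 5)")
        posts = db.get_results(
            "SELECT ID, post_title FROM posts ORDER BY comment_count DESC LIMIT 4")
        print(posts)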

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any standard format, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-data-based API around a database, implementing the common business rules, so the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response; I can't keep feeding the client multiple XML packets unless the client explicitly requests them. I don't have any session management, just an API key. I know that if I had sessions, I could keep a dataset alive temporarily for a client, and the client could request bits and pieces of it. Without session management, though, I would have to execute the SQL query multiple times (once for each chunk of data), and if the data changes in the meantime, the "pages" might get shuffled, causing items to show up on the wrong pages after navigating. So how is this commonly handled? What's the methodology behind breaking a large XML dataset into chunks while keeping the load down?
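
    The usual stateless answer is keyset pagination: the client passes back the last key it saw, each chunk re-queries with a stable sort, and rows cannot shuffle between pages the way they can with OFFSET-style paging. A small illustration in Python with sqlite3 (table and column names invented):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO products (name) VALUES (?)",
                         [("product %d" % i,) for i in range(250)])

        def fetch_chunk(after_id=0, limit=100):
            # 'WHERE id > last-seen-id ORDER BY id' pins each row to one chunk:
            # concurrent inserts/deletes can't shift rows between pages.
            return conn.execute(
                "SELECT id, name FROM products WHERE id > ? ORDER BY id LIMIT ?",
                (after_id, limit)).fetchall()

        last_id = 0
        while True:
            chunk = fetch_chunk(last_id)
            if not chunk:
                break
            last_id = chunk[-1][0]  # the client echoes this in its next request
            print(len(chunk), "rows, up to id", last_id)

    Each request stands alone, so no session state is needed; the API key alone is enough.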

    Read the article

  • Fetching Data from Multiple Tables using Joins

    Applying normalization to relational databases tends to promote better accuracy of queries, but it also leads to queries that take a little more work to develop, as the data may be spread amongst several tables. In today's article, we'll learn how to fetch data from multiple tables by using joins.
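
    A compact illustration of the idea, using Python's sqlite3 with an invented two-table schema:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT,
                                  author_id INTEGER REFERENCES authors(id));
            INSERT INTO authors VALUES (1, 'Ursula K. Le Guin'), (2, 'Italo Calvino');
            INSERT INTO books   VALUES (1, 'The Dispossessed', 1),
                                       (2, 'Invisible Cities', 2);
        """)

        # The JOIN stitches the normalized rows back together at query time.
        rows = conn.execute("""
            SELECT b.title, a.name
            FROM books AS b
            JOIN authors AS a ON a.id = b.author_id
            ORDER BY b.title
        """).fetchall()
        print(rows)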

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables - you are often faced with many thousands, if not millions, of rows. Most people are happy looking at the first few rows, but occasionally you need to see more. SQL Developer doesn't show you all records at once; instead, it brings the records down in 'chunks', as needed.

    How It Works

    There is a preference that tells SQL Developer how many records to get in a single request, or 'fetch', of records. The default is 50. So if I run a query that returns MORE than 50 rows, there are more than 50 records in the result set, but we have 50 in the grid to start with. We don't actually know how many records are in the result set; to show the record count, we would have to physically query the database with a row-count type of query. All we know is that the query has finished executing and that there are rows available to fetch; it tells us when it's done. As you scroll through the grid, when you reach record 50 and keep scrolling, we fetch 50 more records. Or you can cheat to get to the 'bottom' of the result set: you can ask SQL Developer to just get all the records at once. Once all the records have been fetched, you'll see: All rows fetched!

    A word of caution

    There's a reason the default is 50 and not 1000. Bringing back data can get expensive and heavy. We've found the best performance in the 50-to-200-record range.
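
    The same fetch-size mechanism exists in most database client APIs. A small Python illustration using sqlite3's fetchmany, where arraysize plays the role of SQL Developer's preference:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (n INTEGER)")
        conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(175)])

        cur = conn.execute("SELECT n FROM t")
        cur.arraysize = 50             # the 'fetch size' preference
        while True:
            chunk = cur.fetchmany()    # one fetch of up to 50 rows
            if not chunk:
                break                  # "All rows fetched!"
            print("fetched", len(chunk), "rows")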

    Read the article

  • Fetching Latitude and Longitude Co-ordinates for Addresses using PowerShell

    - by Rob Farley
    Regular readers of my blog (at sqlblog.com – please let me know if you’re reading this elsewhere) may be aware that I’ve been doing more and more with spatial data recently. With the now-available SQL Server 2008 R2 Reporting Services including maps, it’s a topic that interests many people. Interestingly though, although many people have plenty of addresses in their various databases (whether they be CRM systems, HR systems or whatever), my experience shows that many people do not store the latitude...(read more)

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition and the name of the filegroup on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required.

    The table I'm working on contains 8.8 billion rows, and finding the distinct partition keys from it was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $partition function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: DISTINCT queries against 8.8 billion rows don't go quickly.

    A few beers into a conversation with a friend, we ended up talking about work, which led to the task described above. The solution was already built into SQL Server; I just needed to pull it together. The first table I needed was sys.partition_range_values, which contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th, 2013 would be represented as an int, 20130704. This table lists all of the dayid values defined in the function, which eliminated the need to query my source table for distinct dayid values - everything I needed was already built in. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values, so I created an "everything else" bucket in my SSIS package just in case any dayid values were unaccounted for.

    Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition, so it's enough to query for the object_id and an index value of 1 (the clustered index). The final piece of information was the filegroup name. There are two options for getting it, sys.data_spaces or sys.filegroups; I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between sys.partitions and either sys.data_spaces or sys.filegroups you need the container_id, which you get by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups.

    The end result is the query below, which typically executes for me in under 1 second. I've included both the join to sys.filegroups and the join to sys.data_spaces, with the latter commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
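
    The query itself did not survive this excerpt. As a hedged reconstruction from the joins described above (the table name is a stand-in), its core would look like the string below, shown wrapped in Python with pyodbc:

        import pyodbc

        # Hypothetical connection string; adjust driver/server/database.
        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

        # Reconstructed from the joins described in the post: row counts per
        # partition plus the filegroup each partition lives on, without ever
        # touching the 8.8-billion-row table itself.
        SQL = """
        SELECT p.partition_number,
               p.rows,
               fg.name AS filegroup_name
        FROM sys.partitions AS p
        JOIN sys.allocation_units AS au ON au.container_id = p.hobt_id
        JOIN sys.filegroups       AS fg ON fg.data_space_id = au.data_space_id
        WHERE p.object_id = OBJECT_ID('dbo.BigTable')  -- stand-in table name
          AND p.index_id  = 1                          -- the clustered index
        ORDER BY p.partition_number;
        """

        for row in conn.execute(SQL):
            print(row.partition_number, row.rows, row.filegroup_name)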

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
    After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they're giving it these days. (Seriously... that no one could come up with a suitable replacement for "metro" is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they're seeing as much as 70% code reuse. Not bad.

    However, I've already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing the SDK already caused dismay among developers, and I'm sure this won't help.

    So I started refactoring some code I already had for a Windows 8 Store app (ugh) to use HttpWebRequest instead. I haven't tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point, since it's been a long time since I used HttpWebRequest; keep in mind that it has no exception handling and needs refinement. The goal here is to new up the client and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here's the start:

        using System.Net;
        using System.Threading.Tasks;
        using Newtonsoft.Json;
        using System.IO;

        namespace MahProject
        {
            public class ServiceClient<T> where T : class
            {
                public ServiceClient(string url)
                {
                    _url = url;
                }

                private readonly string _url;

                public async Task<T> GetResult()
                {
                    var response = await MakeAsyncRequest(_url);
                    var result = JsonConvert.DeserializeObject<T>(response);
                    return result;
                }

                public static Task<string> MakeAsyncRequest(string url)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.ContentType = "application/json";
                    Task<WebResponse> task = Task.Factory.FromAsync(
                        request.BeginGetResponse,
                        asyncResult => request.EndGetResponse(asyncResult),
                        null);
                    return task.ContinueWith(t => ReadStreamFromResponse(t.Result));
                }

                private static string ReadStreamFromResponse(WebResponse response)
                {
                    using (var responseStream = response.GetResponseStream())
                    using (var reader = new StreamReader(responseStream))
                    {
                        var content = reader.ReadToEnd();
                        return content;
                    }
                }
            }
        }

    Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (the Park model class is omitted because it doesn't matter):

        public class ParkRepo
        {
            public async Task<Park[]> GetAllParks()
            {
                var client = new ServiceClient<Park[]>("http://superfoo/endpoint");
                return await client.GetResult();
            }
        }

    And then from inside your WP8 or W8S app (see what I did there?), when you load state or handle some kind of UI event (making sure the method uses the async keyword):

        var parkRepo = new ParkRepo();
        var results = await parkRepo.GetAllParks();
        // bind results to some UI or observable collection or something

    Hopefully this saves you a little time.

    Read the article

  • What Pattern will solve this - fetching dependent record from database

    - by tunmise fasipe
    I have these classes:

        class Match {
            int MatchID;
            int TeamID; // used to reference Team
            // ... other fields
        }

    (Note: a Match actually has two teams, which means two TeamIDs.)

        class Team {
            int TeamID;
            string TeamName;
        }

    In my view I need to display a List<Match> showing the TeamName, so I added another field:

        class Match {
            int MatchID;
            int TeamID; // used to reference Team
            // ... other fields
            string TeamName;
        }

    I can now do:

        Match m = getMatch(id);
        m.TeamName = getTeamName(m.TeamId); // get name from database

    But for a List<Match>, getTeamName(TeamId) will go to the database to fetch the TeamName for each TeamID. For a page of 10 matches, that could be (10 x 2 teams) = 20 trips to the database. To avoid this, I had the idea of loading everything once, storing it in memory and looking the TeamName up only in memory. That made me rethink: what if there are 5000 records or more? What pattern is used to solve this, and how?
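
    The common pattern here is a batch fetch: collect the TeamIDs for the current page, fetch all the names in one IN query, then join in memory. A minimal sketch in Python with sqlite3 (schema invented for the example):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE teams   (team_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE matches (match_id INTEGER PRIMARY KEY,
                                  home_id INTEGER, away_id INTEGER);
            INSERT INTO teams   VALUES (1, 'Lions'), (2, 'Tigers'), (3, 'Bears');
            INSERT INTO matches VALUES (10, 1, 2), (11, 2, 3);
        """)

        matches = conn.execute(
            "SELECT match_id, home_id, away_id FROM matches LIMIT 10").fetchall()

        # One extra round-trip for the whole page, instead of 2 per match.
        ids = {t for m in matches for t in (m[1], m[2])}
        marks = ",".join("?" * len(ids))
        names = dict(conn.execute(
            "SELECT team_id, name FROM teams WHERE team_id IN (%s)" % marks,
            tuple(ids)))

        for match_id, home, away in matches:
            print(match_id, names[home], "vs", names[away])

    This keeps the cost at two queries per page regardless of table size, unlike loading the whole Team table up front; a small LRU cache over team names is a reasonable middle ground.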

    Read the article

  • problems in fetching upgrades

    - by andre
    I am presently using Ubuntu 10.04 and I would like to upgrade to the latest Ubuntu release, but while trying to upgrade I get the errors below. Can anybody help me solve this and proceed? I am just a beginner with Ubuntu. Thanks.

        W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-image-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80]
        W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-headers-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80]

    Read the article

  • Zend: Fetching row from session db table after generating session id

    - by Nux
    Hi, I'm trying to update the session table used by Zend_Session_SaveHandler_DbTable directly after authenticating the user and writing the session to the DB. But I can neither update nor fetch the newly inserted row, even though the session id I use to check (Zend_Session::getId()) is valid and the row is indeed inserted into the table. Upon fetching all session ids (on the same request), the one I newly inserted is missing from the results. It does appear in the results if I fetch it with something else. I've checked whether it is a problem with transactions, and that does not seem to be it - there is no active transaction when I'm fetching the results. I've also tried waiting a few seconds after writing using sleep(), which doesn't help.

        $auth->getStorage()->write($ident);
        //sleep(1);
        $update = $this->db->update('session',
            array('uid' => $ident->user_id),
            'id=' . $this->db->quote(Zend_Session::getId()));
        $qload = 'SELECT id FROM session';
        $load = $this->db->fetchAll($qload);
        echo $qload;
        print_r($load);

    $update fails. $load doesn't contain the row that was written with $auth->getStorage()->write($identity). $qload does contain the correct query - copying it to somewhere else leads to the expected result, that is, the inserted row is included in the results. The database is MySQL (InnoDB). If someone knows how to fix this directly (i.e. on the same request, not something like updating after redirecting to another page) without modifying Zend_Session_SaveHandler_DbTable: thank you very much!

    Read the article

  • Hibernate: fetching multiple bags efficiently

    - by Jens Jansson
    Hi! I'm developing a multilingual application. For this reason many objects have, in their name and description fields, collections of something I call LocalizedStrings instead of plain strings. Every LocalizedString is basically a pair of a locale and a string localized to that locale. Let's take as an example entity a book object:

        public class Book {
            @OneToMany
            private List<LocalizedString> names;
            @OneToMany
            private List<LocalizedString> description;
            // and so on...
        }

    When a user asks for a list of books, the application queries for all the books, fetches the name and description of every book in the locale the user has selected to run the app in, and displays them. This works, but it is a major performance issue. At the moment Hibernate makes one query to fetch all the books, and after that goes through every single object and asks Hibernate for the localized strings for that specific object, resulting in an "n+1 selects" problem: fetching a list of 50 entities produces about 6000 rows of SQL commands in my server log. I tried making the collections eager, but that led me to the "cannot simultaneously fetch multiple bags" issue. Then I tried setting the fetch strategy on the collections to subselect, hoping it would do one query for all books and then one query fetching all LocalizedStrings for all the books. Subselect didn't work in this case how I had hoped, and basically did exactly the same as my first case. I'm starting to run out of ideas on how to optimize this. So, in short: what fetching-strategy alternatives are there when you are fetching a collection whose every element has one or more collections of its own, which have to be fetched at the same time?
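
    Not a Hibernate answer, but for comparison: the strategy being asked for here - load the parents, then load each collection with one SELECT ... WHERE parent_id IN (...) query - is exposed directly by some other ORMs. A sketch of it in Python with SQLAlchemy's selectinload (models invented; a second collection would just add one more IN query, never a product of the two):

        from sqlalchemy import ForeignKey, create_engine
        from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                    mapped_column, relationship, selectinload)

        class Base(DeclarativeBase):
            pass

        class LocalizedString(Base):
            __tablename__ = "localized_string"
            id: Mapped[int] = mapped_column(primary_key=True)
            book_id: Mapped[int] = mapped_column(ForeignKey("book.id"))
            locale: Mapped[str]
            text: Mapped[str]

        class Book(Base):
            __tablename__ = "book"
            id: Mapped[int] = mapped_column(primary_key=True)
            names: Mapped[list[LocalizedString]] = relationship()

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            session.add(Book(names=[LocalizedString(locale="en", text="The Trial"),
                                    LocalizedString(locale="de", text="Der Prozess")]))
            session.commit()

            # Two queries total, no matter how many books there are:
            # SELECT ... FROM book, then
            # SELECT ... FROM localized_string WHERE book_id IN (...)
            books = session.query(Book).options(selectinload(Book.names)).all()
            for book in books:
                print({ls.locale: ls.text for ls in book.names})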

    Read the article

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this:

        FooBar Ltd 123456
        http://example.com/Foobar-Ltd-123456.html
        Project: backend
        Tracker: Dataerror
        Beschreibung: This is the description
        ===========================
        CLIENT_IP: 192.168.1.215
        HTTP_USER_AGENT: mozilla/asdfjköl

    I try to fetch them into Redmine via this command:

        rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
          RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
          password=1234 project=myproject tracker=other \
          allow_override=project,tracker,category,priority \
          move_on_success=read move_on_failure=failed

    But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment:

        danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about
        About your application's environment
        Ruby version              1.8.7 (i486-linux)
        RubyGems version          1.3.5
        Rack version              1.0
        Rails version             2.3.5
        Active Record version     2.3.5
        Active Resource version   2.3.5
        Action Mailer version     2.3.5
        Active Support version    2.3.5
        Application root          /var/www/projects/redmine
        Environment               production
        Database adapter          mysql
        Database schema version   20100819172912
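
    One way to poke at the mailbox itself, independent of Redmine, is a few lines of Python imaplib listing what ended up in the failed folder; the host and credentials below simply mirror the rake arguments above:

        import imaplib

        # Host/credentials taken from the rake command; adjust as needed.
        M = imaplib.IMAP4_SSL("example.com", 993)
        M.login("redmine", "1234")

        # Inspect the folder the rake task moves failures into.
        M.select("failed", readonly=True)
        typ, data = M.search(None, "ALL")
        for num in data[0].split():
            typ, msg = M.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
            print(num.decode(), msg[0][1].decode(errors="replace").strip())
        M.logout()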

    Read the article

  • "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OSX, a self-update progress bar appears instead (at 0 of 30MB or so). This bar does not advance, an error dialog appears:

        Steam needs to be online to update
        Please confirm your network connection and try again.

    The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OSX users. Things that have already been tried, with no effect:

      - Switching between wifi and ethernet.
      - Killing all Steam processes including ipcserver.
      - Moving the ~/Library/Application Support/Steam/registry.vdf file away.
      - Requesting those URLs with other clients and from other locations.

    Interesting: the first URL returns the same content even without the date parameter (and would thus lead to the same 404s), suggesting that the problem is not necessarily specific to a particular currently-installed version of Steam.
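
    A small probe script to reproduce the check described above - fetch the manifest, then test each referenced CDN file - written in Python (the regex is a guess at how the filenames appear in the manifest body):

        import re
        import urllib.error
        import urllib.request

        MANIFEST = ("http://store.steampowered.com/public/client/"
                    "steam_client_osx?date=718277")
        CDN = "http://cdn.store.steampowered.com/public/client/"

        def status(url):
            # HEAD probe; returns the HTTP status code.
            req = urllib.request.Request(url, method="HEAD")
            try:
                return urllib.request.urlopen(req, timeout=10).status
            except urllib.error.HTTPError as e:
                return e.code

        body = urllib.request.urlopen(MANIFEST, timeout=10).read().decode("latin-1")

        # Guess at the manifest format: pull out anything that looks like the
        # zip-plus-sha1 names seen in the Console log.
        for name in re.findall(r"\w+_osx\.zip\.[0-9a-f]{40}", body):
            print(status(CDN + name), CDN + name)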

    Read the article

  • Mail on OS X not fetching new Gmail mail – Among other issues

    - by Josh K
    It started after my 10.6.3 update. I'm basically looking for a "clear cache" button in Mail. It won't fetch new Gmail (the big problem), and there are six copies of the same message in my Inbox that won't delete (each attempt spits out an error). I want to know if (a) anyone else has noticed this issue or (b) anyone has found a solution for a similar problem. It's worth noting that I was getting the dreaded Finder -10810 won't-start error, along with a progressive loss of control. This prompted two (2) hard resets of the machine, and then I immediately installed this update. That has not happened since the update, but now there's the mail issue.

    Read the article

  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OSX, a self-update progress bar appears instead (at 0 of 30MB or so). This bar does not advance, an error dialog appears: Steam needs to be online to update Please confirm your network connection and try again. The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically: Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277 [...] Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969 Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8 Requesting the first URL successfully returns some data including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OSX users. Things that have already been tried with no effect: Switching between wifi and ethernet. Killing all Steam processes including ipcserver. Moving the ~/Library/Application Support/Steam/registry.vdf file away. Requesting those URLs with other clients and from other locations. Interesting: that first URL with the date parameter returns the same content even without that parameter (thus would lead to the same 404s) suggesting that the problem is not necessarily specific to coming from a particular currently-installed version of Steam.

    Read the article

  • chef apt_repository fetching fails

    - by slik
    I am trying to fetch a specific repository to install a PHP version, but I keep getting 404 Not Found. The chef recipe code:

        apt_repository "dotdeb-php54" do
          uri          "http://archives.dotdeb.org"
          distribution "squeeze"
          components   ["php5/5.4.8"]
          key          "http://www.dotdeb.org/dotdeb.gpg"
        end

    It is trying to fetch http://archives.dotdeb.org/dists/squeeze/php5/5.4.8, but I get the following errors:

        Err http://archives.dotdeb.org squeeze/php5/5.4.8 amd64 Packages 404 Not Found
        Hit http://security.ubuntu.com precise-security/main Translation-en
        Hit http://security.ubuntu.com precise-security/multiverse Translation-en
        Hit http://security.ubuntu.com precise-security/restricted Translation-en
        Err http://archives.dotdeb.org squeeze/php5/5.4.8 i386 Packages 404 Not Found
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Ign http://archives.dotdeb.org squeeze/php5/5.4.8 Translation-en
        STDERR: W: Failed to fetch http://archives.dotdeb.org/dists/squeeze/php5/5.4.8/binary-amd64/Packages 404 Not Found

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
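
    For reference, the mechanics of the 304 path - a conditional GET echoing the server's Last-Modified back as If-Modified-Since - look like this (a sketch in Python; the URL is a placeholder, and the resource is assumed to send a Last-Modified header):

        import urllib.error
        import urllib.request

        URL = "https://example.com/static/style.css"  # placeholder resource

        # First request: unconditional; expect 200 plus a Last-Modified header.
        resp = urllib.request.urlopen(URL, timeout=10)
        last_modified = resp.headers.get("Last-Modified")
        assert last_modified, "server sent no Last-Modified header"
        print(resp.status, "Last-Modified:", last_modified)

        # Second request: conditional. If the resource is unchanged, the server
        # answers 304 Not Modified with an empty body - what the browser does
        # on F5 when it holds a validator in its cache.
        req = urllib.request.Request(URL,
                                     headers={"If-Modified-Since": last_modified})
        try:
            resp = urllib.request.urlopen(req, timeout=10)
            print(resp.status)  # 200: resource changed, full body again
        except urllib.error.HTTPError as e:
            print(e.code)       # 304: cached copy still valid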

    Read the article

  • How do I force fetching of tags if I have the --no-tags option set

    - by douglas.meyer
    Whenever I run git fetch, it fetches all the tags from origin. In a project with lots of tags this can get quite bothersome, so I ran git config remote.origin.tagopt --no-tags, and fetching no longer fetches tags. However, there are times when I do want to fetch tags, or a single tag. Does anyone know how to do this (besides removing that configuration and running git fetch --no-tags every time)? Thanks!

    Read the article
