Search Results

Search found 1509 results on 61 pages for 'offline'.


  • Is Git ready to be recommended to my boss?

    - by Mike Weller
    I want to recommend Git to my boss as a new source control system, since we're stuck in the 90s with VSS (ouch), but are the tools and third-party support good enough yet? Specifically, I'm talking about GUI front-ends similar to TortoiseSVN, decent visual diff/merge support, as well as things like email commit notifications and general support from third parties like IDEs and build systems. Even though this will be used by programmers, we really need this kind of stuff in our team. I don't want to leave everyone stuck with a new tool, and even a new source control paradigm (distributed), with nothing but a command-line app and some online tutorials. That would be a step backwards. So what do you think... is Git ready? What decent tools exist for Git, and what third-party development apps support it? EDIT: My original question was pretty vague, so I'm updating it to specifically ask for a list of available tools and third-party support for Git. Maybe we can get a community wiki post with a list of stuff. I also do not consider 'use Subversion' to be an adequate answer. There are reasons other than offline editing to use a distributed source control system - private and cheap branches being one of them.


  • ERROR_MORE_DATA ---- Reading from Registry

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find out more information on offreg.dll here: MSDN. Currently, while attempting to read a value from an open registry hive/key I receive the following error: 234, or ERROR_MORE_DATA. Here is the .h declaration of ORGetValue:

        DWORD ORAPI ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code that I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue",
                   SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue,
                                             out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;
        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer... or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
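
    ERROR_MORE_DATA typically means the output buffer was too small - and an out string parameter marshals no buffer in at all. A sketch of the usual two-call pattern, with pvData redeclared as IntPtr (the exact return code of the sizing call is an assumption to verify against the offreg documentation):

        using System.Runtime.InteropServices;

        [DllImport("offreg.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern uint ORGetValue(IntPtr handle, string lpSubKey, string lpValue,
                                      out uint pdwType, IntPtr pvData, ref uint pcbData);

        static string GetStringValue(IntPtr key, string valueName)
        {
            uint type;
            uint cb = 0;
            // Sizing call: null buffer, pcbData comes back with the required byte count.
            ORGetValue(key, null, valueName, out type, IntPtr.Zero, ref cb);

            IntPtr buf = Marshal.AllocHGlobal((int)cb);
            try
            {
                uint ret = ORGetValue(key, null, valueName, out type, buf, ref cb);
                if (ret != 0)
                    throw new System.ComponentModel.Win32Exception((int)ret);
                // REG_SZ data is UTF-16 (two bytes per character); drop the null terminator.
                return Marshal.PtrToStringUni(buf, (int)cb / 2).TrimEnd('\0');
            }
            finally
            {
                Marshal.FreeHGlobal(buf);
            }
        }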


  • HTML5 Database Transactions

    - by jiewmeng
    I am wondering about the example in W3C Offline Web Apps. The example:

        function renderNotes() {
            db.transaction(function(tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
                tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) {
                    for (var i = 0; i < rs.rows.length; i++) {
                        renderNote(rs.rows[i]);
                    }
                });
            });
        }

    has the CREATE TABLE before the 'main' executeSql(). Would it be better if I did something like this?

        $(function() {
            // create the table first
            db.transaction(function(tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
            });
            // when I select/modify data later, I just do the actual action
            db.transaction(function(tx) {
                tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { ... });
            });
            db.transaction(function(tx) {
                tx.executeSql('INSERT ...', [], function(tx, rs) { ... });
            });
        });

    I was thinking I don't need to keep repeating the CREATE TABLE IF NOT EXISTS, right?
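
    One wrinkle worth knowing: db.transaction() is asynchronous, so on a first visit nothing guarantees the CREATE TABLE transaction finishes before the SELECT one starts. A sketch of the usual workaround - run the schema once and kick everything else off from the transaction's success callback:

        var db = openDatabase('notes', '1.0', 'Notes', 5 * 1024 * 1024);

        db.transaction(
            function (tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
            },
            function (err) { console.log('schema failed: ' + err.message); },
            function () {
                // success callback: the table now exists, so querying is safe
                db.transaction(function (tx) {
                    tx.executeSql('SELECT * FROM Notes', [], function (tx, rs) {
                        for (var i = 0; i < rs.rows.length; i++) renderNote(rs.rows[i]);
                    });
                });
            }
        );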


  • MS Access 2003 - Failure to create MDE file: error VBA is corrupt?

    - by Justin
    OK, so this is a brand new snag I have run into. I am trying to create a new MDE from my source MDB file, and it is locking up Access. In the MDB, I first compact and repair, and then select 'create a new MDE' (just as I have done many times before). It looks like it starts the process, but it never gets to the point where it compacts at the end, and Access stops responding. After I force-close the app, I look in the folder where I am trying to create the MDE and I see a new db1 file there. If I try to open it, I get an error saying the file was not found, and then that the Visual Basic for Applications project is corrupt. The thing is, I only made a very simple adjustment to the code since the last MDE build, and I double- and triple-checked it afterwards... it's not that, because it is just a simple open-this-form-and-close-this-one addition. I did, however, have my source MDB file on a disc that I copied to my laptop, and then tried to relink the tables to the network drive (they had been linked to tables on my local drive so that I could develop offline)?? PLEASE HELP!!!


  • Simple continuously running XMPP client in python

    - by tom
    I'm using python-xmpp to send Jabber messages. Everything works fine, except that every time I want to send messages (every 15 minutes) I need to reconnect to the Jabber server, and in the meantime the sending client is offline and cannot receive messages. So I want to write a really simple, indefinitely running XMPP client that is online the whole time and can send (and receive) messages when required. My trivial (non-working) approach:

        import time
        import xmpp

        class Jabber(object):
            def __init__(self):
                server = 'example.com'
                username = 'bot'
                passwd = 'password'
                self.client = xmpp.Client(server)
                self.client.connect(server=(server, 5222))
                self.client.auth(username, passwd, 'bot')
                self.client.sendInitPresence()
                self.sleep()

            def sleep(self):
                self.awake = False
                delay = 1
                while not self.awake:
                    time.sleep(delay)

            def wake(self):
                self.awake = True

            def auth(self, jid):
                self.client.getRoster().Authorize(jid)
                self.sleep()

            def send(self, jid, msg):
                message = xmpp.Message(jid, msg)
                message.setAttr('type', 'chat')
                self.client.send(message)
                self.sleep()

        if __name__ == '__main__':
            j = Jabber()
            time.sleep(3)
            j.wake()
            j.send('[email protected]', 'hello world')
            time.sleep(30)

    The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so, how would I best go about that? EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
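
    The constructor above blocks in sleep() on the main thread, so wake() can never run. A minimal sketch of the threaded variant, assuming xmpppy's Process() pump (server and credentials are placeholders):

        import threading
        import xmpp

        class Jabber(object):
            def __init__(self, server='example.com', username='bot', passwd='password'):
                self.client = xmpp.Client(server, debug=[])
                self.client.connect(server=(server, 5222))
                self.client.auth(username, passwd, 'bot')
                self.client.sendInitPresence()
                # Pump the stream in a background thread so the client stays
                # online and incoming stanzas keep being handled.
                t = threading.Thread(target=self._loop)
                t.setDaemon(True)
                t.start()

            def _loop(self):
                while True:
                    self.client.Process(1)   # wait up to 1 s for server data

            def send(self, jid, msg):
                self.client.send(xmpp.Message(to=jid, body=msg, typ='chat'))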


  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices chained together in one store, I would love to have a local back-end database on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "offline" with the local database until the connection to the main server comes up again. Now I am in a dilemma over which local database is going to be the most suitable. Here are some notes to help point me in the right direction:

        It has to be light: my POS devices are usually old and suffer performance-wise.
        It has to be free: I have a lot of devices and I do not want additional cost beside the main SQL Server.
        One day I would love to try porting all of it to Mono and Linux.

    Here is what I've researched so far:

        Simple XML: light, but I am afraid of performance - my main items table averages 10K records.
        SQL Server Express: I am afraid my POS devices' hardware is too poor for it, and it is also hard to install and configure on each device.
        Advantage Database Server: a less-known option with a free distribution of its offline ADT system.
        DBF with an extended library: respect for good old DBFs, but the era of Clipper and DBFs is behind me.
        MS Access.
        SQLite: my favorite for now, but I am afraid of how it will pair with MS SQL - do they have the same data types?

    I know there is a lot of subjective data in this, but at least can someone recommend some other lite database systems, or the things I should pay the most attention to before I choose a database?


  • Spider a Website and Return URLs Only

    - by Rob Wilkerson
    I'm not quite sure how best to define/articulate this, but I'm looking for a way to pseudo-spider a website. The key is that I don't actually want the content, but rather a simple list of URIs. I can get reasonably close to this idea with Wget using the --spider option, but when piping that output through a grep, I can't seem to find the right magic to make it work:

        wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:'

    The grep filter seems to have absolutely no effect on the wget output. Have I got something wrong, or is there another tool I should try that's more geared towards providing this kind of limited result set? Thanks. UPDATE: So I just found out offline that, by default, wget writes to stderr. I missed that in the man pages (in fact, I still haven't found it if it's in there). Once I piped stderr to stdout, I got closer to what I need:

        wget --spider --force-html -r -l1 http://somesite.com 2>&1 | grep 'Saving to:'

    I'd still be interested in other/better means of doing this kind of thing, if any exist.
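
    One wrinkle with grepping for 'Saving to:' is that --spider saves nothing, so that string may never appear in the log at all. A sketch that instead pulls every URL out of the (stderr) log, whatever the log line format:

        wget --spider --force-html -r -l1 http://somesite.com 2>&1 \
          | grep -oE 'https?://[^ ]+' | sort -u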


  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application, which will need to run on approx. 1000 devices. These devices already exist, and consist of iPhones, BlackBerrys, Androids, Windows Mobile devices and netbooks. The application will have simple reporting capability and a collection of forms. Anyway, the obvious solution would be to develop some browser-based solution, although given the occasionally connected nature of the devices, there's a potential for data to get lost / not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file it requests from a server (sketched below), resulting in minimal specialist development costs and the ability to run most of the logic on the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection. Anyway, my question summarised is: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem, and you just have to make the call to support one or two, or pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks, James
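
    To make the form-generator idea concrete, a hypothetical sketch of the XML a dumb client might fetch and render (every element and attribute name here is invented for illustration):

        <form id="site-inspection" version="3">
            <field name="inspector"    type="text"   required="true"/>
            <field name="visit_date"   type="date"/>
            <field name="hazard_level" type="choice">
                <option>low</option>
                <option>medium</option>
                <option>high</option>
            </field>
        </form>

    The client renders whatever arrives and queues completed forms as local files until a connection is available.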


  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods for generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known - what if a lot of people's caches expire at the same time? The solution also does not scale well - the brief is for feeds to update as close to real-time as possible. The hypothesised solution in my eyes seems much better: all processing is done offline, so no user waits for a page to generate, and there are no joins, so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, that results in inserting 2,000,000 rows into the database.

    Question: This boils down to two points: Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
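
    For reference, the standard method's query is roughly the following - a sketch, with column names beyond the two table names above being guesses:

        -- Feed for one viewer: join events against the friendship table.
        SELECT e.*
        FROM event_log e
        JOIN friends f ON f.friend_id = e.user_id
        WHERE f.user_id = ?          -- the viewer
        ORDER BY e.created_at DESC
        LIMIT 50;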


  • PHP script running as scheduled task hangs - help!

    - by Ali
    Hi guys, I've built a PHP script that runs from the command line. It opens a connection to a POP3 email account, downloads all the emails, writes them to a database, and deletes them once downloaded. The script is called from the command line by a .bat file, and in turn I have created a scheduled task which invokes the .bat file every 5 minutes. I have set the timeout to zero because at times there could be emails with large attachments; the script downloads the attachments and stores them as raw files offline, and the unlimited timeout is so that the script doesn't die while downloading. I've found that the program sometimes hangs, and it's a bit annoying at that - it always hangs at one point, i.e. when negotiating the connection to the mail server. Because the timeout is set to zero, it stays stuck in that position, and since the task is technically still running, the scheduled task never fires again. I want the program not to time out while downloading emails; however, at the point where it is negotiating a connection to the mail server, there should be a timeout at that step alone and not for the rest of the program's execution. How do I do this :(
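
    One way to bound only the negotiation phase is to open the socket yourself with a connect timeout and then relax the limits once connected. A sketch (the server name is a placeholder; if a POP3 library is in use, look for its equivalent connect-timeout option):

        <?php
        set_time_limit(0);                    // no overall script timeout
        $errno = 0; $errstr = '';
        // The 4th argument bounds ONLY the connection attempt (30 s here).
        $conn = stream_socket_client('tcp://pop.example.com:110',
                                     $errno, $errstr, 30);
        if ($conn === false) {
            die("POP3 connect failed: $errstr ($errno)\n");
        }
        // Abort if the server goes silent for 60 s mid-command,
        // but place no cap on total transfer time.
        stream_set_timeout($conn, 60);
        echo fgets($conn);                    // expect the +OK greeting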


  • How to parse XML into Flex DataGrid contents

    - by Jeeva
    My XML file, which is on a webserver, is shown below:

        <root>
            <userdetails>
                <username>raja</username>
                <status>offline</status>
            </userdetails>
            <userdetails>
                <username>Test</username>
                <status>online</status>
            </userdetails>
        </root>

    How can I parse this into Flex DataGrid contents? I tried the code below:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"
                        creationComplete="initApp()">
            <mx:HTTPService id="userList" result="handleData(event)" resultFormat="object"
                            url="http://apps.facebook.com/ajparkin/user_list.xml" />
            <mx:Script>
                <![CDATA[
                    import mx.collections.ArrayCollection;
                    import mx.rpc.events.ResultEvent;
                    import mx.controls.Alert;

                    public function initApp():void {
                        userList.send();
                    }

                    [Bindable]
                    var userdetailsArray:ArrayCollection;

                    private function handleData(evt:ResultEvent):void {
                        this.userdetailsArray = evt.result.userdetails;
                    }
                ]]>
            </mx:Script>
            <mx:DataGrid dataProvider="{userdetailsArray}">
                <mx:columns>
                    <mx:DataGridColumn dataField="username" headerText="User Name"/>
                    <mx:DataGridColumn dataField="status" headerText="Status"/>
                </mx:columns>
            </mx:DataGrid>
        </mx:Application>

    I'm getting only the field names, not the data.
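
    Two common causes of the empty-grid symptom, sketched below: with resultFormat="object" the decoded tree may still be rooted at <root>, and a repeating node only arrives as an ArrayCollection when there is more than one of it. The property paths here are assumptions to verify in the debugger:

        private function handleData(evt:ResultEvent):void {
            // Try evt.result.root.userdetails if evt.result.userdetails is null.
            var details:Object = evt.result.userdetails;
            userdetailsArray = (details is ArrayCollection)
                ? ArrayCollection(details)
                : new ArrayCollection([details]);   // single row: wrap it
        }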


  • Website: AJAX and Firefox problems. I don't think Firefox likes AJAX..?

    - by DJDonaL3000
    I'm working on an AJAX website (HTML, CSS, JavaScript, AJAX, PHP, MySQL). I have multiple JavaScript functions which take rows from MySQL, wrap them in HTML tags, and embed them in the page (the usual usage of AJAX). THE PROBLEM: Everything works perfectly, except when I run the site in Firefox (for once it's not Internet Explorer causing the trouble). The site is currently in the development stage, so it's offline, but running on localhost (WampServer, Apache, Windows XP SP3/Vista/7). All other cross-browser conflicts have been removed, and it works perfectly in all major browsers including IE, Chrome, Opera and Safari, but I get absolutely nothing from the XMLHttpRequest (AJAX) if the browser is Firefox. All browsers are on their latest versions. THE CODE: I have a series of JavaScript functions, all of which are structured as follows:

        function getDatay() {
            var a = document.getElementById('item').innerHTML;
            var ajaxRequest;
            try { // browser support code
                // code for IE7+, Firefox, Chrome, Opera, Safari:
                ajaxRequest = new XMLHttpRequest();
            } catch (e) {
                // code for IE6, IE5:
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {
                        // something went wrong
                        alert("Your browser is not compatible - Browser Incompatibility Issue.");
                        return false;
                    }
                }
            }
            // Create a function that will receive data sent from the server
            ajaxRequest.onreadystatechange = function() {
                if (ajaxRequest.readyState < 4) {
                    document.getElementById('theDiv').innerHTML = 'LOADING...';
                }
                if (ajaxRequest.readyState == 4) {
                    document.getElementById('theDiv').innerHTML = ajaxRequest.responseText;
                }
            }
            // Post vars to PHP script and wait for response:
            var url = "01_retrieve_data_7.php";
            url = url + "?a=" + a;
            ajaxRequest.open("POST", url, false); // must be false here to wait for ajaxRequest to complete.
            ajaxRequest.send(null);
        }

    My money is on the final five lines of code being the cause of the problem. Any suggestions on how to get Firefox and AJAX working together are most welcome...
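
    The hunch about those last lines is likely right: Firefox does not fire onreadystatechange for synchronous requests, so with async=false the handler never runs there. A sketch of the usual fix - pass true and let the callback do the updating:

        ajaxRequest.onreadystatechange = function () {
            if (ajaxRequest.readyState == 4) {
                document.getElementById('theDiv').innerHTML = ajaxRequest.responseText;
            }
        };
        ajaxRequest.open("POST", url, true);  // async: readystatechange now fires in Firefox too
        ajaxRequest.send(null);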


  • W3C error doc error? Output tag browser support.

    - by ThomasReggi
    I was looking at the reference page here: http://www.w3.org/TR/html5/offline.html. I copied and pasted the code onto my server in separate files. All of the pages are linked correctly, but the clock won't show. Just to double-check that it wasn't my server config, I put it on jsfiddle.net here: http://jsfiddle.net/reggi/Dy8PU/. Fails: Mac / Firefox 3.6.13. Works: Mac / Firefox 4.0b8. Is this dummy example code?

        <!-- clock.html -->
        <!DOCTYPE HTML>
        <html>
            <head>
                <title>Clock</title>
                <script src="clock.js"></script>
                <link rel="stylesheet" href="clock.css">
            </head>
            <body>
                <p>The time is: <output id="clock"></output></p>
            </body>
        </html>

        /* clock.css */
        output { font: 2em sans-serif; }

        /* clock.js */
        setTimeout(function () {
            document.getElementById('clock').value = new Date();
        }, 1000);

    UPDATE: The W3C code above works only on the newest beta releases of certain browsers. Below are some viable current JavaScript workarounds.
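
    For what it's worth, a sketch of one such workaround: Firefox 3.6 predates <output>, so assigning its value property does nothing there. Writing to textContent works either way, and setInterval keeps the clock ticking instead of setting it once:

        /* clock.js - fallback version */
        setInterval(function () {
            document.getElementById('clock').textContent = new Date();
        }, 1000);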


  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is synchronization with the database on the server. The specific problem is the primary keys, or more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones. But then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB? Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based doesn't seem right because of the collisions which could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I'm thinking about it, that looks like the best solution. Can I expect any problems with that?
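
    Generating a GUID in JavaScript is actually manageable; a sketch of the widely used v4-style generator (Math.random is not cryptographically strong, but that is usually acceptable for sync keys):

        function uuid4() {
            return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
                var r = Math.random() * 16 | 0;
                var v = (c === 'x') ? r : (r & 0x3 | 0x8);
                return v.toString(16);
            });
        }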


  • Document management, SCM ?

    - by tsunade
    Hello, this might not be a hard-core programming question, but it's related to some of the tools used by programmers, I suspect. We're a bunch of people, each with a bunch of documents and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me. Using an SCM comes to mind, and I've tried Subversion; it seems like a good thing that it uses a centralized repository, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally I've tried Unison (which is a wrapper around rsync, I think), and while it works, it becomes horribly slow for the big directories we have here, since it has to scan everything. So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a no, does anyone know other tools that do this job? Thanks for reading :)


  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys", and someone there seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and a third option of using negative ids for client ids and then updating all negative ids with the suids. Both of these other options seem like a lot more work. Thanks, Saimon
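
    A sketch of the server-side mapping table described above (names and types are illustrative; CHAR(36) assumes textual UUIDs):

        CREATE TABLE id_map (
            client_id CHAR(36) NOT NULL,
            luid      CHAR(36) NOT NULL,  -- id generated on the client
            suid      CHAR(36) NOT NULL,  -- canonical id on the server
            PRIMARY KEY (client_id, luid)
        );

        -- Rewriting an incoming foreign key during sync:
        SELECT suid FROM id_map WHERE client_id = ? AND luid = ?;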


  • GPS does not update location after closing and reopening the app on Android

    - by mrmamon
    After I close my app for a while and then reopen it, the app will not update the location, or sometimes it takes a long time (about 5 min) before updating. How can I fix this? This is my code:

        private LocationManager lm;
        private LocationListener locationListener;

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
            locationListener = new MyLocationListener();
            lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);
        }

        private class MyLocationListener implements LocationListener {
            @Override
            public void onLocationChanged(Location loc) {
                if (loc != null) {
                    TextView gpsloc = (TextView) findViewById(R.id.widget28);
                    gpsloc.setText("Lat:" + loc.getLatitude() + " Lng:" + loc.getLongitude());
                }
            }

            @Override
            public void onProviderDisabled(String provider) {
                TextView gpsloc = (TextView) findViewById(R.id.widget28);
                gpsloc.setText("GPS OFFLINE.");
            }

            @Override
            public void onProviderEnabled(String provider) {
            }

            @Override
            public void onStatusChanged(String provider, int status, Bundle extras) {
            }
        }
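
    A sketch of the usual remedy: tie the updates to the activity lifecycle, re-requesting them every time the app comes back to the foreground and releasing the GPS when it leaves, so a reopened activity always makes a fresh fix request:

        @Override
        protected void onResume() {
            super.onResume();
            lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);
        }

        @Override
        protected void onPause() {
            super.onPause();
            lm.removeUpdates(locationListener);   // stop draining GPS in the background
        }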


  • ASP.NET - What is the best way to block the application usage?

    - by Tufo
    Our clients must pay a monthly fee; if they don't, what is the best way to block usage of the ASP.NET software? Note: the application runs on the client's own server; it's not a SaaS app. My ideas are:

        Idea 1: Host a web service on the internet that the application will use to check whether the client can use the software.
        Issue 1: What happens if the client's internet fails? Or the data center fails?
        Possible answer: Have each web service call return a key that is valid for 7 or 15 days, so each successful call enables the software to run for another 7 or 15 days. This way the application will only be locked after 7 or 15 days without reaching our web service.
        Issue 2: What if the client doesn't have, or doesn't want to enable, internet access for the application?

        Idea 2: Send a key to the client monthly.
        Issue: How to make an offline key?
        Possible answer: Generate a hash using the "limit" date, so each login attempt compares today's hash with the key? (Sketched below.)
        Issue 2: Where to store the key?
        Possible answer: Database (not good, too easy to change), text file, registry, code file, assembly...

    Any opinion will be very appreciated!
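
    A sketch of the offline-key idea: derive the key from a secret plus the expiry date with an HMAC (a bare hash of the date alone could be recomputed by anyone who guesses the scheme). All names and the key format here are illustrative:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static string MakeKey(DateTime validUntil, byte[] secret)
        {
            using (var hmac = new HMACSHA256(secret))
            {
                byte[] mac = hmac.ComputeHash(
                    Encoding.UTF8.GetBytes(validUntil.ToString("yyyy-MM-dd")));
                // Keep it typeable: the date plus the first 4 bytes of the MAC in hex.
                return validUntil.ToString("yyyyMMdd") + "-" +
                       BitConverter.ToString(mac, 0, 4).Replace("-", "");
            }
        }

        // On each login the application recomputes MakeKey for the date embedded
        // in the stored key and compares; past-dated keys simply fail.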


  • facebook graph api does not return all feed items on facebook page

    - by Nick Franceschina
    At the time of this question, if you go here: http://www.facebook.com/realplayer you'll see, six posts down, that I have posted a photo with a message of "#highfive Cincinnati, OH". But if you go to either of these: http://graph.facebook.com/realplayer/feed http://graph.facebook.com/realplayer/tagged the JSON that is returned seemingly includes everything on the wall except my post. There is another photo post from someone else below mine, and it does show up (and both my photo and his photo are in the "Fan photos" section). Obviously, since everyone can see those posts via the links above without even authenticating, it appears that access_token is not part of the equation... BUT, some more info: if I use an access_token from a session that isn't me, I can't see the post in the JSON; if I use an access_token from MY logged-in session, then I DO see the post in the JSON. So I'm very confused: if everyone in the world can see those posts on the wall without authenticating, then I'd expect all of them to come back from the Graph API as well. Does anyone have thoughts on this? I am aware of the "manage_pages" permission, which I can use to get a list of accounts and special offline access tokens for those pages, and that's something I can explore, but it seems like a lot of work when my post seemingly SHOULD be there in the graph.
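
    For the manage_pages route, the flow is short; a sketch with curl (token values are placeholders):

        # List the pages the logged-in user manages; each entry includes a
        # page access token that can see posts an ordinary token may not.
        curl "https://graph.facebook.com/me/accounts?access_token=USER_TOKEN"

        # Then re-query the feed with that page token:
        curl "https://graph.facebook.com/realplayer/feed?access_token=PAGE_TOKEN"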


  • How to build a simulation of a login hardware token in .Net

    - by Michel
    Hi, I have a hardware token for remote login to a Citrix environment. When I click the button on the device, I get an ID and I can use that to log in to the Citrix farm. I can click the button as much as I like, and every time a new code gets generated, and they all work. Now I want to secure my private website likewise, but not with the hardware token - with a "token app" on my phone instead. So I run an app on my phone, generate a key, and use that to (partly) authenticate myself on the server. But here's the point: I don't know how it works! How can I generate 1, 2 or 100 keys at one time which the server can all verify as valid, but without the server and the phone app ever having contact (the hardware token is also an "offline" solution)? Can you give me a hint as to how I would do this? This is what I thought of so far: the phone app and the server app know (hardcoded) the same encryption key. The phone app encrypts the current time. The server app decrypts the string back to a time, and if the difference between that time and the actual server time is less than 10 minutes, it's OK. That makes it difficult for other users to fake a key, but encryption produces such nasty strings to type in, whereas the hardware token gives me nice short codes like 'H554TU8'. And this is probably not how the real hardware token works, because the server and the phone app must 'know' the same encryption key. Michel
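
    What such tokens do is essentially HOTP/TOTP: both sides share a secret and independently compute a short code from a counter or the current time window, so no connection is ever needed. A Python 3 sketch of the time-based variant (RFC 6238, with the digit count chosen to echo the short codes mentioned above):

        import hashlib, hmac, struct, time

        def totp(secret, window=30, digits=7):
            counter = int(time.time()) // window          # 30-second time step
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                       # dynamic truncation
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        # The server accepts the code for the current window (and usually the
        # neighbouring windows, to absorb clock drift between the devices).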


  • Installing a group of .deb files in Ubuntu

    - by p00ya
    Hi. I have a directory of .deb files which I copied from apt's cache folder. There are many applications and Ubuntu updates among them, but there's no dependency failure because they were all downloaded automatically by 'Add/Remove Applications' and 'Update Manager'. Now I have installed the same version of Ubuntu (9.04) again, and I want to install those apps and updates again (even though they are not new versions). In other words, I want to make this fresh Ubuntu install exactly like the old one, but without downloading anything, using only those .deb files that I copied. All I have is an archive folder containing the .deb files and a 'pkgcache.bin' file. I know I can double-click the .deb files and install them manually, but then I have to work out and follow the dependencies one by one from the installer errors. I have also tried adding an offline repository, but it didn't work; I think that's because all of my .debs are in one folder, and there are no separate 'main', 'restricted', ... folders?! Is there a way to do all of this automatically? Thanks.
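
    A sketch of the usual recipe: index the folder with dpkg-scanpackages (from the dpkg-dev package) so apt can resolve dependencies from it, or just hand everything to dpkg at once (paths are illustrative):

        cd /path/to/debs
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        echo "deb file:/path/to/debs ./" | sudo tee -a /etc/apt/sources.list
        sudo apt-get update      # the folder now acts as an offline repository

        # Alternative: install everything in one shot and let dpkg sort the ordering.
        sudo dpkg -i *.deb
        sudo apt-get -f install  # fix up anything left unconfigured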


  • How to remove a "green screen" portrait background

    - by danbystrom
    I'm looking for a way to automatically remove (= make transparent) a "green screen" portrait background from a lot of pictures. My own attempts this far have been... ehum... less than successful. I'm looking around for any hints, solutions or papers on the subject. Commercial solutions are just fine, too. And before you comment and say that it is impossible to do this automatically: no, it isn't. There actually exists a company which offers exactly this service, and if I fail to come up with a different solution we're going to use them. The problem is that they guard their algorithm with their lives, and therefore won't sell/license their software. Instead, we have to FTP all pictures to them, where the processing is done, and then FTP the result back home. (And no, they don't have an underpaid staff hidden away in the Philippines handling this manually, since we're talking several thousand pictures a day...) However, this approach limits its usefulness for several reasons. So I'd really like a solution where this could be done instantly while being offline from the internet.
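
    For a first automated attempt, a rough chroma-key sketch with OpenCV: pixels inside a green HSV range become transparent. The thresholds are guesses to tune per lighting setup, and production solutions add edge despill and matte refinement on top of this:

        import cv2
        import numpy as np

        img = cv2.imread("portrait.jpg")                    # hypothetical input file
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        lower = np.array([35, 60, 60])                      # green hue range (tune)
        upper = np.array([85, 255, 255])
        mask = cv2.inRange(hsv, lower, upper)
        mask = cv2.GaussianBlur(mask, (5, 5), 0)            # soften matte edges
        rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
        rgba[:, :, 3] = 255 - mask                          # alpha = inverse of the mask
        cv2.imwrite("portrait.png", rgba)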


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines × 7 loggers × 5 years = a lot of data. Some of the data is organized in bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/Matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analysis. For example, a user could search for all blue items shipped between week x and week y, while another user sorts the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seems inefficient. I'm looking forward to hearing your ideas. Thanks
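
    The user-driven queries described above map naturally onto Postgres aggregates; a sketch (bin column names are taken from the list earlier, everything else is a guess):

        SELECT date_trunc('week', date_collected) AS week,
               count(*)                           AS items,
               avg(item_weight)                   AS avg_weight
        FROM items
        WHERE item_color = 'blue'
          AND date_collected BETWEEN '2009-01-05' AND '2009-03-30'
        GROUP BY 1
        ORDER BY 1;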


  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients) coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service start-up and pooling used; but I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit on open connections (to different databases)? It would be very interesting to hear your opinions and suggestions for this situation. Thanks
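
    One point in favour of open-late/close-early: ADO.NET pools connections per connection string, so each company database gets its own pool automatically, and a per-request open/close is cheap after the first call. A sketch (the helper name is invented):

        using (var conn = new SqlConnection(GetConnectionString(companyId)))
        {
            conn.Open();    // served from that company's pool after first use
            // ... run this request's queries ...
        }   // Dispose returns the connection to the pool rather than closing it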


  • C# Type conversion between two similar Datatable objects

    - by Ali
    I have a .NET project with Sync Framework and two separate DataSets for MS SQL and SQL Server Compact. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline. Example:

        if (online)
            _dataTable = new MSSQLDataSet.Customer;
        else
            _dataTable = new CompactSQLDataSet.Customer;

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

        public void changeCustomerID(int ID)
        {
            if (online)
                ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
            else
                ((CompactSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        }

    but I don't think this is very efficient, and I believe it can be done in a smarter way, using only one line of code, by dynamically getting the type of _dataTable at run time. My problem is at design time: in order to access DataTable properties such as CustomerID, it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactSQLDataSet.CustomerDataTable. Is there a way to have a function or operator that converts _dataTable to its runtime type but still lets me use its design-time properties, which are the same between the two types? Something like:

        ((aType)_dataTable)[i].CustomerID = value;
        // or
        GetRuntimeType(_dataTable)[i].CustomerID = value;
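
    One cast-free option, sketched below: both generated CustomerDataTables expose the same column names, so the untyped DataRow indexer works against either, at the cost of compile-time checking. Otherwise, keep the cast in a single helper instead of at every call site:

        // Untyped access works for either table type:
        _dataTable.Rows[i]["CustomerID"] = value;

        // Or centralize the cast once:
        private int GetCustomerId(int i)
        {
            return (int)_dataTable.Rows[i]["CustomerID"];
        }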

