Search Results

Search found 58783 results on 2352 pages for 'data modeling'.

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here.

    We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users, and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria: it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.

    The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages... for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions, use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties... button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to be worked with. Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway.

    The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall. In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data.

    The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards.

    This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific. To discover what the first version of Excel was to support Web queries, go to http://bit.ly/OldSchoolXL.
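    If you'd rather hand-roll the page than drop a GridView on it, the output contract is tiny. Here's a minimal C# sketch of an HTTP handler that renders a query's result set as an HTML table, column names in the first row, just the way the Web Query feature expects; the handler name, connection string and query are of course placeholders:

        // TableFeed: a hypothetical ASP.NET IHttpHandler that "publishes" a query
        // result as an HTML table for Excel's Web Query feature to consume.
        using System.Data.SqlClient;
        using System.Web;

        public class TableFeed : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/html";
                var w = context.Response.Output;
                w.Write("<html><body><table border=\"1\">");
                // Placeholder connection string and query.
                using (var conn = new SqlConnection("Server=...;Database=...;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand("SELECT Region, Sales FROM SalesSummary", conn))
                {
                    conn.Open();
                    using (var rdr = cmd.ExecuteReader())
                    {
                        // First row: column names, which Web Query treats as headers.
                        w.Write("<tr>");
                        for (int i = 0; i < rdr.FieldCount; i++)
                            w.Write("<th>" + HttpUtility.HtmlEncode(rdr.GetName(i)) + "</th>");
                        w.Write("</tr>");
                        // Then one <tr> per data row.
                        while (rdr.Read())
                        {
                            w.Write("<tr>");
                            for (int i = 0; i < rdr.FieldCount; i++)
                                w.Write("<td>" + HttpUtility.HtmlEncode(rdr[i].ToString()) + "</td>");
                            w.Write("</tr>");
                        }
                    }
                }
                w.Write("</table></body></html>");
            }
        }

    Point a Web Query at the handler's URL and you're done.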

    Read the article

  • Exception using SQLiteDataReader

    - by galford13x
    I'm making a custom SQLite wrapper, which is meant to allow a persistent connection to a database. However, I receive an exception when calling this function twice:

        public Boolean DatabaseConnected(string databasePath)
        {
            bool exists = false;
            if (ConnectionOpen())
            {
                this.Command.CommandText = string.Format(DATABASE_QUERY);
                using (reader = this.Command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        if (string.Compare(reader[FILE_NAME_COL_HEADER].ToString(), databasePath, true) == 0)
                        {
                            exists = true;
                            break;
                        }
                    }
                    reader.Close();
                }
            }
            return exists;
        }

    I use the above function to check whether the database is currently open before executing a command or trying to open a database. The first time I execute the function, it runs with no issue. After that, reader = this.Command.ExecuteReader() throws an exception: "Object reference not set to an instance of an object."

    StackTrace:

        at System.Data.SQLite.SQLiteStatement.Dispose()
        at System.Data.SQLite.SQLite3.Reset(SQLiteStatement stmt)
        at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
        at System.Data.SQLite.SQLiteDataReader.NextResult()
        at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
        at System.Data.SQLite.SQLiteCommand.ExecuteReader()
        at EveTraderApi.Database.SQLDatabase.DatabaseConnected(String databasePath) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 579
        at EveTraderApi.Database.SQLDatabase.OpenSQLiteDB(String filename) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApi\Database\Database.cs:line 119
        at EveTraderApiExample.Form1.CreateTableDataTypes() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 89
        at EveTraderApiExample.Form1.Button1_ExecuteCommand(Object sender, EventArgs e) in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Form1.cs:line 35
        at System.Windows.Forms.Control.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnClick(EventArgs e)
        at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
        at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.ButtonBase.WndProc(Message& m)
        at System.Windows.Forms.Button.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
        at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
        at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
        at System.Windows.Forms.Application.Run(Form mainForm)
        at EveTraderApiExample.Program.Main() in C:\Documents and Settings\galford13x\My Documents\Visual Studio 2008\Projects\EveTrader\EveTraderApiExample\Program.cs:line 18
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()
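    Judging by the trace, the shared this.Command is being reset while the statement from the previous call is still being disposed. A hedged sketch of one workaround, creating a short-lived command per call instead of reusing a wrapper-level one (this.Connection stands in for however the wrapper exposes its open SQLiteConnection):

        public Boolean DatabaseConnected(string databasePath)
        {
            bool exists = false;
            if (ConnectionOpen())
            {
                // A fresh command per call: both command and reader are disposed
                // deterministically before the next call can touch them.
                using (SQLiteCommand cmd = this.Connection.CreateCommand())
                {
                    cmd.CommandText = DATABASE_QUERY;
                    using (SQLiteDataReader rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                        {
                            if (string.Compare(rdr[FILE_NAME_COL_HEADER].ToString(),
                                               databasePath, true) == 0)
                            {
                                exists = true;
                                break;
                            }
                        }
                    }
                }
            }
            return exists;
        }

    The original also assigns the reader to what looks like a class-level field; keeping it method-local removes a second path by which a half-disposed reader could be reused.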

    Read the article

  • Winform radiobutton data binding

    - by Rajarshi
    I am following the "Presentation Model" design pattern suggested by Martin Fowler for my GUI architecture in a Windows Forms project.

        "The essence of a Presentation Model is of a fully self-contained class that represents all the data and behavior of the UI window, but without any of the controls used to render that UI on the screen. A view then simply projects the state of the presentation model onto the glass..." - Martin Fowler

    Read more about this pattern at www.martinfowler.com/eaaDev/PresentationModel.html

    I am finding the concept very fluid and easy to understand, except for this one issue of data binding RadioButtons to properties on the data/domain object. Supposing I have a Windows Form with 3 radio buttons to depict some "Mode" options: Auto, Manual, Import. How can I use boolean properties on data/domain objects to data bind to these buttons? I have tried many ways, but to no avail. For example, I would like to code like this:

        rbtnAutoMode.DataBindings.Add("Text", myBusinessObject, "IsAutoMode");
        rbtnManualMode.DataBindings.Add("Text", myBusinessObject, "IsManualMode");
        rbtnImportMode.DataBindings.Add("Text", myBusinessObject, "IsImportMode");

    There should be a fourth property, like "SelectedMode", on the data/domain object, which in the end should hold a single value like SelectedMode = Auto. I am trying to update this property when any of "IsAutoMode", "IsManualMode" or "IsImportMode" is changed, e.g. through the property setters. I have INotifyPropertyChanged implemented on my data/domain object, so updating any data/domain object property automatically updates my UI controls; that's not an issue.

    There is a good example of binding 2 radio buttons here: http://stackoverflow.com/questions/344964/how-do-i-use-databinding-with-windows-forms-radio-buttons but I am missing the link when implementing the same with 3 buttons. I am getting very erratic behavior from the radio buttons. I hope I was able to explain reasonably. I am actually in a hurry and could not put detailed code in the post, but any help in this regard is appreciated.

    There is a simple solution to this issue by exposing a method like

        public void SetMode(Modes mode)
        {
            this._selectedMode = mode;
        }

    which could be called from the "CheckedChanged" event of the radio buttons in the UI and would perfectly set the "SelectedMode" on the business object, but I need to stretch the limits to verify whether this can be done by data binding.
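    For what it's worth, a hedged sketch of one arrangement that fits the three-button case: bind Checked rather than Text, and let the setters keep the three booleans mutually exclusive. The property names follow the question; everything else is an assumption (requires System.ComponentModel and System.Windows.Forms):

        public enum Modes { Auto, Manual, Import }

        public class MyBusinessObject : INotifyPropertyChanged
        {
            private Modes selectedMode = Modes.Auto;

            public event PropertyChangedEventHandler PropertyChanged;

            public Modes SelectedMode { get { return selectedMode; } }

            public bool IsAutoMode
            {
                get { return selectedMode == Modes.Auto; }
                set { if (value) SetMode(Modes.Auto); }
            }

            public bool IsManualMode
            {
                get { return selectedMode == Modes.Manual; }
                set { if (value) SetMode(Modes.Manual); }
            }

            public bool IsImportMode
            {
                get { return selectedMode == Modes.Import; }
                set { if (value) SetMode(Modes.Import); }
            }

            private void SetMode(Modes mode)
            {
                selectedMode = mode;
                // Notify all three flags so every RadioButton re-reads its
                // state whenever any one of them flips.
                OnPropertyChanged("IsAutoMode");
                OnPropertyChanged("IsManualMode");
                OnPropertyChanged("IsImportMode");
                OnPropertyChanged("SelectedMode");
            }

            private void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    And the bindings, pushing on every property change rather than on validation:

        rbtnAutoMode.DataBindings.Add("Checked", myBusinessObject, "IsAutoMode",
            false, DataSourceUpdateMode.OnPropertyChanged);
        rbtnManualMode.DataBindings.Add("Checked", myBusinessObject, "IsManualMode",
            false, DataSourceUpdateMode.OnPropertyChanged);
        rbtnImportMode.DataBindings.Add("Checked", myBusinessObject, "IsImportMode",
            false, DataSourceUpdateMode.OnPropertyChanged);

    The update mode matters here: with the default OnValidation, the buttons push their values only on focus changes, which is one common source of the erratic behavior described.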

    Read the article

  • A Python Wrapper for Shutterfly. Uploading an Image

    - by iJames
    I'm working on a Django app in which I want to order prints through Shutterfly's Open API: http://www.shutterfly.com/documentation/start.sfly

    So far I've been able to build the appropriate POSTs and GETs using the modules and suggested code snippets, including httplib, httplib2, urllib, urllib2, mimetype, etc. But I'm stuck on the image upload when placing an order. (The ordering process is not the same process as uploading images to albums, which I haven't tried.)

    From what I can tell, I'm supposed to basically create the multipart form data by concatenating the HTTP request body together with the binary data of the image. I take the strings:

        --myuniqueboundary1273149960.175.1
        Content-Disposition: form-data; name="AuthenticationID"

        auniqueauthenticationid
        --myuniqueboundary1273149960.175.1
        Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg"
        Content-Type: image/jpeg

    and I put the image data into it and end with the final boundary:

        ...\xb5|\xf88\x1dj\t@\xd9\'\x1f\xc6j\x88{\x8a\xc0\x18\x8eGaJG\x03\xe9J-\xd8\x96[\x91T\xc3\x0eTu\xf4\xaa\xa5Ty\x80\x01\x8c\x9f\xe9Z\xad\x8cg\xba# g\x18\xe2\xaa:\x829\x02\xb4["\x17Q\xe7\x801\xea?\xad7j\xfd\xa2\xdf\x81\xd2\x84D\xb6)\xa8\xcb\xc8O\\\x9a\xaf(\x1cqM\x98\x8d*\xb8\'h\xc8+\x8e:u\xaa\xf3*\x9b\x95\x05F8\xedN%\xcb\xe1B2\xa9~Tw\xedF\xc4\xfe\xe8\xfc\xa9\x983\xff\xd9...

    That ends up making it look like this (when I use print to debug):

        ...
        --myuniqueboundary1273149960.175.1
        Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg"
        Content-Type: image/jpeg

        ????q?ExifMM* ? ??(1?2?<??i?b?NIKON CORPORATIONNIKON D40HHQuickTime 7.62009:02:17 13:05:25Mac OS X 10.5.6%??????"?'??0220?????? ???? ? ?|_???,b???50??5
        ...
        --myuniqueboundary1273149960.175.1--

    My code for grabbing the binary data is pretty much this:

        filedata = open('myjpegfile.jpeg','rb').read()

    which I then add to the rest of the body. I've seen something like this code everywhere. I'm then using this to post the full request (with the headers too):

        response = urllib2.urlopen(request).read()

    This seems to me to be the standard way form POSTs with files happen. Am I missing something here? At some point I might be able to make this into a library worth posting up on GitHub, but this problem has stopped me cold in my tracks. Thanks for any insight!

    Read the article

  • how to use a tree data structure in C#

    - by matti
    I found an implementation for a tree at this SO question. Unfortunately, I don't know how to use it. I also made a change to it, since LinkedList does not have an Add method:

        delegate void TreeVisitor<T>(T nodeData);

        class NTree<T>
        {
            T data;
            List<NTree<T>> children;

            public NTree(T data)
            {
                this.data = data;
                children = new List<NTree<T>>();
            }

            public void AddChild(T data)
            {
                children.Add(new NTree<T>(data));
            }

            public NTree<T> GetChild(int i)
            {
                return children[i];
            }

            public void Traverse(NTree<T> node, TreeVisitor<T> visitor)
            {
                visitor(node.data);
                foreach (NTree<T> kid in node.children)
                    Traverse(kid, visitor);
            }
        }

    I have a class named tTable, and I want to store its children and their grandchildren (...) in this tree. My need is to find immediate children, not to traverse the entire tree. I also might need to find children matching some criteria. Let's say tTable has only a name, and I want to find children with names matching some criteria; tTable's constructor gives the name a value based on an int value (somehow). How do I use Traverse (and write the delegate) if I have code like this?

        int i = 0;
        Dictionary<string, NTree<tTable>> tableTreeByRootTableName =
            new Dictionary<string, NTree<tTable>>();
        tTable aTable = new tTable(i++);
        tableTreeByRootTableName[aTable.Name] = new NTree<tTable>(aTable);
        tableTreeByRootTableName[aTable.Name].AddChild(new tTable(i++));
        tableTreeByRootTableName[aTable.Name].AddChild(new tTable(i++));
        tableTreeByRootTableName[aTable.Name].GetChild(1).AddChild(new tTable(i++));
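    For what it's worth, a hedged sketch of how Traverse could be called with an anonymous delegate to collect matching names. The matching criterion is invented, and the Data accessor is an assumed one-line addition to NTree<T>, since the class above keeps data private:

        // Assumed addition inside NTree<T>, so callers can read a node's payload:
        //     public T Data { get { return data; } }

        NTree<tTable> root = tableTreeByRootTableName[aTable.Name];

        // Full-subtree search using the visitor delegate:
        List<string> matches = new List<string>();
        root.Traverse(root, delegate(tTable t)
        {
            if (t.Name.StartsWith("Table1"))   // hypothetical criterion
                matches.Add(t.Name);
        });

        // Immediate children don't need Traverse at all; GetChild indexes
        // directly into the node's own children list:
        tTable secondChild = root.GetChild(1).Data;

    On C# 3.0 and later, a lambda (t => ...) works just as well as the delegate keyword.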

    Read the article

  • Cannot execute "LOAD DATA LOCAL INFILE" Mysql query in Rails after a connection reconnection

    - by Ngan
    On Rails 2.3.8 (but I think Rails 3 might have this issue as well, not sure): I get an error when trying to execute a LOAD DATA LOCAL INFILE query after reconnecting to a database. I have a process that parses a file, which can potentially take a bit of time. During the parsing, MySQL closes the connection due to timeout. This is fine; I do an ActiveRecord::Base.verify_active_connections! and I get the connection back (I do this in several places through my app). However, running a LOAD DATA LOCAL INFILE statement, I get this error:

        Mysql::Error: The used command is not allowed with this MySQL version

    It's not a permission issue, I know that for sure. Check out my test in console:

        > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users")
        [Sat Jan 08 00:09:29 2011] (9990) SQL (1.7ms)  LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
        => nil
        > ActiveRecord::Base.connection.disconnect!
        => #<Mysql:0x104c6f890>
        > ActiveRecord::Base.verify_active_connections!
        [Sat Jan 08 00:09:58 2011] (9990) SQL (0.2ms)  SET SQL_AUTO_IS_NULL=0
        => {...connection stuff...}
        > ActiveRecord::Base.connection.execute("LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users")
        [Sat Jan 08 00:10:00 2011] (9990) SQL (0.0ms)  Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
        ActiveRecord::StatementInvalid: Mysql::Error: The used command is not allowed with this MySQL version: LOAD DATA LOCAL INFILE '/tmp/test.infile' INTO TABLE users
            from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/abstract_adapter.rb:221:in `log'
            from ~/gems/activerecord-2.3.8/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
            from (irb):6

    I am able to do other queries like SELECT and whatnot, and I get the correct result. It's just this one that's giving me the error. I even tested this with a fresh Rails app. You'll notice that I am able to do the exact same query before the disconnect. Thanks for the help!

    Read the article

  • Improving performance on data pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be nothing less than 1000) on an Excel spreadsheet, and in this sheet our project has around 150 columns. Our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages, and format the data as per requirement; that means we need to parse and evaluate the data in each row at runtime.

    All the formatting and validation rules come from the back-end database, and we have them in a data table (dtValidateAndFormatConditions). The conditions number around 50. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but I have now reduced it to 20-30 seconds. So far I have increased the speed by writing an expression parser of my own, not through any algorithmic change. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = dRow["Condition"].ToString();
                string FormatValue = dRow["Value"].ToString();
                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow);
                Condition = Parse(Condition);
                dRow["Condition"] = Condition;
                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please know my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not seen here, I have been using LINQ in some parts of my code.)
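    One algorithmic angle, sketched under the assumption that the home-grown parser can be split into a parse phase and an evaluate phase: parse each of the ~50 condition strings once, cache the parsed form, and only evaluate inside the per-row loop. ParsedExpression, Parser.Parse and Evaluate are hypothetical names for that split, not existing APIs:

        // Cache keyed by the raw expression string; conditions come from the
        // database and don't change per row, so every row after the first hits
        // the cache instead of re-parsing.
        private Dictionary<string, ParsedExpression> parseCache =
            new Dictionary<string, ParsedExpression>();

        private ParsedExpression GetParsed(string expression)
        {
            ParsedExpression parsed;
            if (!parseCache.TryGetValue(expression, out parsed))
            {
                parsed = Parser.Parse(expression);   // expensive, done once per distinct expression
                parseCache[expression] = parsed;
            }
            return parsed;
        }

        public void ValidateAndFormatOnCopyPaste(DataTable dtCopied, int curRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                // Only the row-dependent evaluation remains inside the N x 50 loop.
                ParsedExpression condition = GetParsed(dRow["Condition"].ToString());
                ParsedExpression format = GetParsed(dRow["Value"].ToString());
                condition.Evaluate(dtCopied, curRow);
                format.Evaluate(dtCopied, curRow);
            }
        }

    This removes the repeated parsing from the inner loops: roughly 50 parses in total, plus N x 50 evaluations, instead of parsing on every row.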

    Read the article

  • Javascript terminates after trying to select data from an object passed to a function

    - by Silmaril89
    Here is my JavaScript:

        $(document).ready(function(){
            var queries = getUrlVars();
            $.get("mail3.php", { listid: queries["listid"], mindex: queries["mindex"] }, showData, 'html');
        });

        function showData(data) {
            var response = $(data).find("#mailing").html();
            if (response == null) {
                $("#results").html("<h3>Server didn't respond, try again.</h3>");
            } else if (response.length) {
                var old = $("#results").html();
                old = old + "<br /><h3>" + response + "</h3>";
                $("#results").html(old);
                var words = response.split(' ');
                words[2] = words[2] * 1;
                words[4] = words[4] * 1;
                if (words[2] < words[4]) {
                    var queries = getUrlVars();
                    $.get("mail3.php", { listid: queries["listid"], mindex: words[2] }, function(data){ showData(data); }, 'html');
                } else {
                    var done = $(data).find("#done").html();
                    old = old + "<br />" + done;
                    $("#results").html(old);
                }
            } else {
                $("#results").html("<h3>Server responded with an empty reply, try again.</h3>");
            }
        }

        function getUrlVars() {
            var vars = [], hash;
            var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&');
            for (var i = 0; i < hashes.length; i++) {
                hash = hashes[i].split('=');
                vars.push(hash[0]);
                vars[hash[0]] = hash[1];
            }
            return vars;
        }

    After the first line in showData:

        var response = $(data).find("#mailing").html();

    the JavaScript stops. If I put an alert before that line, the alert pops up; after it, it doesn't pop up. There must be something wrong with using $(data), but why? Any ideas would be appreciated.

    Read the article

  • MATLAB query about for loop, reading in data and plotting

    - by mp7
    Hi there, I am a complete novice at using MATLAB and am trying to work out whether there is a way of optimising my code. Essentially I have data from model outputs, and I need to plot them using MATLAB. In addition, I have reference data (with 95% confidence intervals) which I plot on the same graph to get a visual idea of how close the model outputs and the reference data are.

    For the model outputs I have several thousand files (numbered sequentially) which I open in a loop and plot. The problem/question I have is whether I can preprocess the data and then plot later, to save time. The issue I seem to be having when I try this is that the legend either does not appear or is inaccurate. My code (apologies if it is not elegant):

        fn = xlsread(['tbobserved' '.xls']);
        time = fn(:,1);
        totalreference = fn(:,4);
        totalreferencelowerci = fn(:,6);
        totalreferenceupperci = fn(:,7);

        figure
        plot(time, totalreference, '-', time, totalreferencelowerci, '--', time, totalreferenceupperci, '--');
        xlabel('Year');
        ylabel('Reference incidence per 100,000 population');
        title('Total');
        clickableLegend('Observed reference data', 'Totalreferencelowerci', 'Totalreferenceupperci', 'Location', 'BestOutside');
        xlim([1910 1970]);
        hold on

        start_sim = 10000;
        end_sim = 10005;
        h = zeros(1,1000);

        for i = start_sim:end_sim   % is there any way of doing this earlier to save time?
            a = int2str(i);
            incidenceFile = strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls');
            est_tot = importdata(incidenceFile, '\t', 1);
            cal_tot = est_tot.data;
            magnitude = 1;
            t1 = cal_tot(:,1) + 1750;
            totalmodel = cal_tot(:,3) + cal_tot(:,5);
            h(a) = plot(t1, totalmodel);
            xlim([1910 1970]);
            ylim([0 500]);
            hold all
            clickableLegend(h(a), a, 'Location', 'BestOutside')
        end

    Essentially I was hoping to find a way of reading in the data and then plotting later, i.e. optimising the code. I hope you might be able to help. Thanks. mp

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.)

    Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course, creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT I tried in my first attempt.)

    Edit 2: The schema changes include (but are not limited to) the following:

    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead, the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
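    On the core question of streaming between tables while keeping identity values, one option worth noting is SqlBulkCopy from .NET, which bulk-loads from an IDataReader rather than from a file. A hedged sketch follows; the connection strings, query and table names are placeholders, and unlike BULK INSERT it runs in a client process, though it streams directly between the two connections without touching disk:

        using System.Data.SqlClient;

        // Hypothetical source and destination; could also be the same server.
        using (var src = new SqlConnection("Server=...;Database=SourceDb;Integrated Security=SSPI"))
        using (var dst = new SqlConnection("Server=...;Database=TargetDb;Integrated Security=SSPI"))
        {
            src.Open();
            dst.Open();
            using (var cmd = new SqlCommand("SELECT Id, Col1, Col2 FROM dbo.OldTable", src))
            using (var reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(dst,
                       SqlBulkCopyOptions.KeepIdentity | SqlBulkCopyOptions.TableLock, null))
            {
                bulk.DestinationTableName = "dbo.NewTable";
                bulk.BulkCopyTimeout = 0;       // no timeout for large copies
                bulk.WriteToServer(reader);     // streams rows without a flat file
            }
        }

    SqlBulkCopyOptions.KeepIdentity preserves the source identity values, and the SELECT can already apply the column moves and defaults described in Edit 2; as with the SSIS approach, creating the clustered index after the load keeps the copy itself fast.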

    Read the article

  • Ping remote server and wait to get data

    - by infinity
    Hi, I'm building my first application for Android and I've reached a point where I can't find a solution, and I have no idea what to search for in Google.

    The problem: I am pinging a remote server with a GET request through the application, passing some parameters like file_id. The server then gives back confirmation if the file exists, or an error otherwise, both in plain text. The error string is $$$ERROR$$$. The confirmation is actually a JSON string that holds the path to the file. If the file doesn't exist on the server, it generates the error message and starts downloading and processing the file, which normally takes 10-30 seconds.

    What would be the best way to check if the file is ready for download? I have a DownloadFile class that extends AsyncTask, but before I reach the point of downloading the file I need the URL, which depends on the previous request, made in the main class on the UI thread. Here is some code:

        public class MainActivity extends Activity {
            private String getInfo() {
                // Create a new HttpClient and Post Header
                HttpClient httpClient = new DefaultHttpClient();
                HttpGet httpPost = new HttpGet(infoUrl);
                StringBuilder sb = null;
                String data;
                JSONObject jObject = null;
                try {
                    HttpResponse response = httpClient.execute(httpPost);
                    // This might be equal "$$$ERROR$$$" if no file exists
                    sb = inputStreamToString(response.getEntity().getContent());
                } catch(ClientProtocolException e) {
                    Log.v("Error: pushItem ClientProtocolException: ", e.toString());
                } catch (IOException e) {
                    Log.v("Error: pushItem IOException: ", e.toString());
                }
                // Clean the data to be compliant JSON format
                data = sb.toString().replace("info = ", "");
                try {
                    jObject = new JSONObject(data);
                    data = jObject.getString("h");
                    fileTitle = jObject.getString("title");
                } catch (JSONException e) {
                    e.printStackTrace();
                }
                downloadUrl = String.format(downloadUrl, fileId, data);
                return downloadUrl;
            }
        }

    So my idea was to get the content and, if it equals $$$ERROR$$$, loop until JSON data is returned, but I guess there is a better solution.

    Note: I don't have control over the server output, so I have to deal with what I have.

    Read the article

  • Persistence scheme & state data for low memory situations (iphone)

    - by Robin Jamieson
    What happens to state information held by a class's variables after coming back from a low-memory situation? I know that views will get unloaded and then reloaded later, but what about ancillary classes, and the data held in them, that are used by the controller that launched the view?

    Sample scenario in question:

        @interface MyCustomController : UIViewController {
            ServiceAuthenticator *authenticator;
        }
        -(id)initWithAuthenticator:(ServiceAuthenticator *)auth;
        // the user may press a button that will cause the authenticator
        // to post some data to the service.
        -(IBAction)doStuffButtonPressed:(id)sender;
        @end

        @interface ServiceAuthenticator {
            BOOL hasValidCredentials;  // YES if user's credentials have been validated
            NSString *username;
            NSString *password;        // password is not stored in plain text
        }
        -(id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword;
        -(void)postData:(NSString *)data;
        @end

    The app delegate creates the ServiceAuthenticator with some user data (read from a plist file), and the class logs the user in with the remote service. Inside MyAppDelegate's applicationDidFinishLaunching:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            ServiceAuthenticator *auth = [[ServiceAuthenticator alloc]
                initWithUserCredentials:username password:userPassword];
            MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...];
            controller.authenticator = auth;
            // Configure and show the window
            [window addSubview:..];
            // make everything visible
            [window makeKeyAndVisible];
        }

    Then, whenever the user presses a certain button, MyCustomController's doStuffButtonPressed: is invoked:

        -(IBAction)doStuffButtonPressed:(id)sender {
            [authenticator postData:someDataFromSender];
        }

    The authenticator in turn checks whether the user is logged in (a BOOL variable indicates login state) and, if so, exchanges data with the remote service. ServiceAuthenticator is the kind of class that validates the user's credentials only once; all subsequent calls to the object will be to postData.

    Once a low-memory situation occurs and the associated nib & MyCustomController get unloaded -- when it's reloaded, what's the process for setting the ServiceAuthenticator and its former state back up? I'm periodically persisting all of the data in my actual model classes. Should I consider also persisting the state data in these utility-style classes? Is that the pattern to follow?

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values.

    My questions:

    Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regards to maintenance and access, using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance?

    All comments are appreciated. Thanks

    Read the article

  • Automating SSRS 2008 R2 Report Snapshots While Running Reports with the Most Recent Data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot in the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that: when I go to a report, I want to get the most recent data. However, daily at midnight, I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong, and do I have to create a subscription instead?

    Note: this is for an auditing database with far too much data to query a range of more than one day (a single day has over 1 million rows on its own); reports are restricted accordingly.

    Read the article

  • Associating multiple data with a single entry in Open Office Base

    - by idyllhands
    I'm trying to build a database that I can use to track prices of groceries on certain dates. My problem is that I cannot figure out how to associate multiple data points with a single entry.

    For example, carrots: the index would be carrots; then a few categorizing fields (e.g., Produce|Vegetable); then I can enter a price, the date that the price was valid, the store that was selling at said price, etc. The next time I buy carrots, I can just add a new set of pricing data that would be associated with the original carrots entry.

    I know very little about database building, so if anyone has something I could just modify, I would greatly appreciate it. Alternatively, a step-by-step tutorial would be great.

    Read the article

  • Need Some comparative data on Apache and IIS

    - by zod
    This is not a pure programming question, but it is very programming-related, and I am not sure where I should ask it. If you don't know the answer, please don't downvote; if you know the right place to ask, please suggest it or move this question.

    We have a web application running on PHP 5, Zend Framework and Apache. I need some comparative data showing that these technologies and this server are among the best, and that thousands of domains are using them. Do you know any website where I can get this type of data? For example:

    Major websites developed in PHP
    Major domains that run on Apache
    Major websites running on Zend
    A comparative case study on Apache and IIS, or PHP and .NET

    Read the article

  • Stack , data and address space limits on an Ubuntu server

    - by PaulDaviesC
    I am running an Ubuntu server which has around 5000 users, who are allowed to SSH in to the system. In order to cap the memory used by any one process, I have capped the address space limits using limits.conf. So my question is: should I also be limiting data and stack? I feel that is not required, since I am capping address space. Are there any pitfalls if I do not cap the stack and data limits?
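    For reference, a sketch of the limits.conf entries in question; the values are purely illustrative, not recommendations (limits.conf sizes are in KB):

        # /etc/security/limits.conf  --  <domain> <type> <item> <value>
        *    hard    as      1048576    # address space cap (1 GB), already in place
        # the two limits the question asks about:
        *    hard    data    524288     # data segment (heap) cap, 512 MB
        *    hard    stack   16384      # stack cap, 16 MB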

    Read the article

  • How to warehouse data that is not needed from MS SQL server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.

    Read the article

  • Using LDAP to store customer data

    - by mechcow
    We wish to store some data in 389 Directory Server (LDAP) that doesn't fit well into the standard set of schemas that come with the product. Nothing too amazing; things like:

    when the customer joined
    whether they are currently an active customer
    customer certificate[1]
    which environment they are using

    My question is this: should we register for an OID and start writing our own custom schema, or is there a standard schema definition, not shipped with Directory Server, that we can download and use that fits our needs? Should we munge/hack existing attributes and store the data among them (I'm strongly opposed to this, but would be interested in arguments about why it's better than extending)?

    [1] I know there is a field for this, userCertificate, but we don't want to use it to authenticate the user for the purposes of binding.

    Using CentOS 5.5 with 389 Directory Server 8.1.
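    For what it's worth, if the custom-schema route is taken, here is a hedged sketch of what the extension could look like as an LDIF modify against cn=schema. The attribute and class names are invented for illustration, and 1.3.6.1.4.1.99999 stands in for an OID arc you actually control:

        dn: cn=schema
        changetype: modify
        add: attributeTypes
        attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'customerJoinDate' DESC 'Date the customer joined' SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
        attributeTypes: ( 1.3.6.1.4.1.99999.1.2 NAME 'customerActive' DESC 'Whether the customer is currently active' SYNTAX 1.3.6.1.4.1.1466.115.121.1.7 SINGLE-VALUE )
        -
        add: objectClasses
        objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'customerAccount' DESC 'Auxiliary class for customer data' SUP top AUXILIARY MAY ( customerJoinDate $ customerActive ) )

    The two SYNTAX OIDs are the standard Generalized Time and Boolean syntaxes; applied with ldapmodify, 389 DS should accept this online, without a restart.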

    Read the article

  • AWS EC2 can't execute user-data script

    - by Bloodnut
    I'm pretty new to AWS and EC2, but I want to run instances with a user-data script that executes after boot, launching them from another instance. I have installed the EC2 tools and ran the command as explained in various examples, like here http://www.turnkeylinux.org/blog/ec2-userdata and Eric Hammond's tutorials. However, when I actually use the command:

        ec2-run-instances --key my-key --user-data-file myscript my-ami

    it only runs the new instance but doesn't execute the script. myscript contains:

        #!/bin/bash
        echo "hello" > ~/output.txt

    I'm running Ubuntu Server 12.04 AMIs; the target AMIs are duplicates of the initiating instance. If I run

        curl http://169.254.169.254/latest/user-data

    the imported script is there.

    Read the article

  • Can't connect to wireless router anymore due to data rate problem

    - by Jay White
    I was playing around with my wireless router and switched the mode to a fixed mode B. Now I can no longer associate to the AP. Windows does not give any particular error message, but with Wireshark I see that the returned error is that the client does not support the necessary data rate. My wireless card is type N, and it is set to a/b/g-compatible mode. I tried setting it to just B, but this made no difference.

    How can I set the data rate of my card so that I can connect to my AP again? I would prefer not to just reset the device, as some configuration has been done that would be a pain to redo, and I also do not have the ISP password handy. Regardless, I would like to understand this situation better.

    Read the article

  • Where do deleted items go on the hard drive ?

    - by Jerry
    After reading the quote below on the Casey Anthony trial (CNN), I am curious about where deleted files actually go on a hard drive, how they can be seen after being deleted, and to what extent the data can be recovered (fully, partially, etc.).

        "Earlier in the trial, experts testified that someone conducted the keyword searches on a desktop computer in the home Casey Anthony shared with her parents. The searches were found in a portion of the computer's hard drive that indicated they had been deleted, Detective Sandra Osborne of the Orange County Sheriff's Office testified Wednesday in Anthony's capital murder trial."

    I know some of the questions here on SO address third-party software that can be used for this kind of thing, but I'm more interested in how this data can be seen after deletion, where it resides on the hard drive, etc. I find the whole topic intriguing, so any additional insight is welcome.

    Read the article

  • Where do deleted items go on the hard drive?

    - by Jerry
    After reading the quote below on the Casey Anthony trial (CNN), I am curious about where deleted files actually go on a hard drive, how they can be seen after being deleted, and to what extent the data can be recovered (fully, partially, etc.).

        "Earlier in the trial, experts testified that someone conducted the keyword searches on a desktop computer in the home Casey Anthony shared with her parents. The searches were found in a portion of the computer's hard drive that indicated they had been deleted, Detective Sandra Osborne of the Orange County Sheriff's Office testified Wednesday in Anthony's capital murder trial."

    I know some of the questions here on Super User address third-party software that can be used for this kind of thing, but I'm more interested in how this data can be seen after deletion, where it resides on the hard drive, etc. I find the whole topic intriguing, so any additional insight is welcome.

    Read the article

  • Recovery of Pinnacle Studio Project Files

    - by seanieb
    My external hard drive had some sort of issue a few months ago, but I was able to recover my files using a data recovery program. However, my Pinnacle Studio project files are not being recovered as before; they are being recovered as directories/folders that have subdirectories and files. I have tried several different recovery programs, and they all recover the projects as directories. The projects all contain one file called README.TXT:

        * WARNING This directory contains the descriptive data of the project, split into
        various subdirectories and files for better access. DO NOT EDIT, ADD, CHANGE OR
        MODIFY ANY OF IT'S CONTENTS!

    This gives me hope that I could somehow just turn the directory back into a .stu Pinnacle Studio project file. How would I go about doing this? Or is there another way to solve this problem?

    Read the article
