Search Results

Search found 60391 results on 2416 pages for 'data generation'.

  • Extracting data from Visual FoxPro databases

    - by whitequark
    I just got some 20 GB of data in a Visual FoxPro database, with a custom frontend probably written in the same framework, and I need to extract that data into some well-known format. I don't know anything about VFP in particular, but as it is SQL-based, there should be a way of opening an SQL console, or maybe a vfpdump utility. How can I do that? All I have now is a bunch of obscure binary files and a frontend executable.
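
    For illustration: VFP tables are DBF files (often with .fpt memo files alongside), so one hedged way to dump them without VFP itself is the third-party Python dbfread package. A minimal sketch, assuming dbfread can read these particular files; the file name and encoding are placeholders:

        # Dump one Visual FoxPro table (a .dbf file) to CSV.
        # Assumes the third-party dbfread package; "table.dbf" and the
        # cp1252 encoding are placeholders to adjust per table.
        import csv
        from dbfread import DBF

        table = DBF("table.dbf", encoding="cp1252")
        with open("table.csv", "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=table.field_names)
            writer.writeheader()
            for record in table:          # each record is a dict-like row
                writer.writerow(record)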

  • Internet Explorer stores working data under %TEMP% instead of normal locations

    - by SWB
    Internet Explorer normally stores its working data in various special locations. For example:

        Cookies - %APPDATA%\Microsoft\Windows\Cookies
        History - %LOCALAPPDATA%\Microsoft\Windows\History
        Cache   - %LOCALAPPDATA%\Microsoft\Windows\Temporary Internet Files

    However, for some reason, Internet Explorer occasionally stops using the normal locations and instead creates new folders under %TEMP% and/or %TEMP%\Low for this data. For example:

        Cookies - %TEMP%\Cookies or %TEMP%\Low\Cookies
        History - %TEMP%\History or %TEMP%\Low\History
        Cache   - %TEMP%\Temporary Internet Files or %TEMP%\Low\Temporary Internet Files

    Why does Internet Explorer do this, and how can I make it use the normal locations again?
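
    For illustration, these per-user folder locations are recorded in the registry under the User Shell Folders key, so one diagnostic step is to check whether those values have been repointed at %TEMP%. A minimal read-only sketch, assuming Python is available on the affected account; whether a stray %TEMP% path here explains the behaviour is an assumption to verify:

        # Print the per-user shell-folder registry values covering IE's
        # Cookies/History/Cache locations. Read-only inspection.
        import winreg

        KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
            for name in ("Cookies", "History", "Cache"):
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print(name, "=", value)
                except FileNotFoundError:
                    print(name, "= (not set)")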

  • Using ASP.NET MVC, LINQ to SQL, and StructureMap causing DataContext to cache data

    - by Dragn1821
    I'll start by telling my project setup: ASP.NET MVC 1.0, StructureMap 2.6.1, VB.

    I've created a bootstrapper class shown here:

        Imports StructureMap
        Imports DCS.Data
        Imports DCS.Services

        Public Class BootStrapper
            Public Shared Sub ConfigureStructureMap()
                ObjectFactory.Initialize(AddressOf StructureMapRegistry)
            End Sub

            Private Shared Sub StructureMapRegistry(ByVal x As IInitializationExpression)
                x.AddRegistry(New MainRegistry())
                x.AddRegistry(New DataRegistry())
                x.AddRegistry(New ServiceRegistry())
                x.Scan(AddressOf StructureMapScanner)
            End Sub

            Private Shared Sub StructureMapScanner(ByVal scanner As StructureMap.Graph.IAssemblyScanner)
                scanner.Assembly("DCS")
                scanner.Assembly("DCS.Data")
                scanner.Assembly("DCS.Services")
                scanner.WithDefaultConventions()
            End Sub
        End Class

    I've created a controller factory shown here:

        Imports System.Web.Mvc
        Imports StructureMap

        Public Class StructureMapControllerFactory
            Inherits DefaultControllerFactory

            Protected Overrides Function GetControllerInstance(ByVal controllerType As System.Type) As System.Web.Mvc.IController
                Return ObjectFactory.GetInstance(controllerType)
            End Function
        End Class

    I've modified the Global.asax.vb as shown here:

        ...
        Sub Application_Start()
            RegisterRoutes(RouteTable.Routes)
            'StructureMap
            BootStrapper.ConfigureStructureMap()
            ControllerBuilder.Current.SetControllerFactory(New StructureMapControllerFactory())
        End Sub
        ...

    I've added a StructureMap registry file to each of my three projects: DCS, DCS.Data, and DCS.Services. Here is the DCS.Data registry:

        Imports StructureMap.Configuration.DSL

        Public Class DataRegistry
            Inherits Registry

            Public Sub New()
                'Data Connections.
                [For](Of DCSDataContext)() _
                    .HybridHttpOrThreadLocalScoped _
                    .Use(New DCSDataContext())

                'Repositories.
                [For](Of IShiftRepository)() _
                    .Use(Of ShiftRepository)()
                [For](Of IMachineRepository)() _
                    .Use(Of MachineRepository)()
                [For](Of IShiftSummaryRepository)() _
                    .Use(Of ShiftSummaryRepository)()
                [For](Of IOperatorRepository)() _
                    .Use(Of OperatorRepository)()
                [For](Of IShiftSummaryJobRepository)() _
                    .Use(Of ShiftSummaryJobRepository)()
            End Sub
        End Class

    Everything works great as far as loading the dependencies, but I'm having problems with the DCSDataContext class that was generated by the LINQ to SQL classes. I have a form that posts to a details page (/Summary/Details), which loads in some data from SQL. I then have a button that opens a dialog box in jQuery, which populates the dialog from a request to (/Operator/Modify). On the dialog box, the form has a combo box and an OK button that lets the user change the operator's name. Upon clicking OK, the form is posted to (/Operator/Modify) and sent through the service and repository layers of my program, and it updates the record in the database. Then RedirectToAction is called to send the user back to the details page (/Summary/Details), where there is a call to pull the data from SQL again, updating the details view.

    Everything works great, except the details view does not show the new operator that was selected. I can step through the code and see the DCSDataContext class being accessed to update the operator (which does actually change the database record), but when the DCSDataContext is accessed to reload the details objects, it pulls in the old value. I'm guessing that StructureMap is causing not only the DCSDataContext class but also the data to be cached?

    I have also tried adding the following to the Global.asax, but it just ends up crashing the program, telling me the DCSDataContext has been disposed...

        Private Sub MvcApplication_EndRequest(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.EndRequest
            StructureMap.ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects()
        End Sub

    Can someone please help?

  • How to purge old data from SVN repository

    - by Supratik
    Hi, the SVN repository is growing rapidly in size and has almost used up the complete hard disk space. How can I create a new repository from the current one containing only the last 3 months of data, and purge/back up the rest of the data? Is there any other solution to this problem? Regards, Supratik
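
    For illustration, one hedged approach is to take a full dump of the old repository first (that becomes the backup of the history being purged), then find the first revision inside the window you want to keep, dump from there with svnadmin, and load that into a fresh repository. A sketch, with all paths and the cutoff date as placeholders:

        # Rebuild an SVN repository keeping only recent history.
        # /srv/old-repo, /srv/new-repo and the cutoff date are placeholders.
        import subprocess

        OLD, NEW = "/srv/old-repo", "/srv/new-repo"

        def run(*cmd, **kw):
            return subprocess.run(cmd, check=True, **kw)

        head = run("svnlook", "youngest", OLD,
                   capture_output=True, text=True).stdout.strip()

        # First revision on/after the cutoff ({DATE} is svn's date-revision syntax).
        log = run("svn", "log", "-q", "--limit", "1", "-r", "{2010-01-01}:HEAD",
                  "file://" + OLD, capture_output=True, text=True).stdout
        start = log.splitlines()[1].split()[0][1:]   # "r1234 | ..." -> "1234"

        run("svnadmin", "create", NEW)
        with open("/srv/recent.dump", "wb") as f:     # dump of the recent window only
            run("svnadmin", "dump", OLD, "-r", f"{start}:{head}", stdout=f)
        with open("/srv/recent.dump", "rb") as f:
            run("svnadmin", "load", NEW, stdin=f)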

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync (only over ssh) to known MAC-address directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files:

      * Would I need to manage hundreds of access keys and have each server customized to include this (doable, but key management becomes nightmarish)?
      * Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system.)
      * Having a bucket per remote machine does not seem feasible due to bucket limits?
      * I don't think I want to use a single common key: if one machine were breached, a malicious hack could potentially get access to the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong...

    I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible?

    TIA, Aitch
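
    For illustration, one pattern that addresses the write-only and per-machine-key concerns is a single bucket with a key prefix per server, and per-server credentials limited to s3:PutObject on that prefix only. A hedged sketch (bucket, prefix, and file names are placeholders; assumes the boto3 SDK):

        # Write-only upload to a per-server prefix in one shared bucket.
        # All names below are illustrative placeholders.
        import boto3

        BUCKET = "sensor-archive"
        PREFIX = "server-0042/"        # one prefix per remote machine

        # IAM policy to attach to this server's credentials (not applied by
        # this script): PutObject only, and only under its own prefix.
        POLICY = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/{PREFIX}*",
            }],
        }

        s3 = boto3.client("s3")
        with open("readings-2024-05-01.csv", "rb") as f:
            s3.put_object(Bucket=BUCKET, Key=PREFIX + "readings-2024-05-01.csv", Body=f)

    With PutObject-only credentials, a compromised box can add objects under its own prefix but cannot list, read, or delete anything, which speaks to the "start deleting for all clients" worry; enabling bucket versioning would also keep overwritten objects recoverable.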

  • How does data clustering help in image or pattern recognition?

    - by anon
    I have been playing around with different data clustering algorithms, working on finding clusters between random data points represented as nodes. I keep reading that data clustering is used for image recognition, but I am failing to make the connection: how does clustering data help in recognizing an image, or in facial recognition? Can someone explain this?
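
    For illustration, here is one concrete connection, as a sketch assuming scikit-learn and its bundled sample image: k-means can cluster an image's pixels by colour, which segments/quantises the image, a common building block in recognition pipelines (e.g. segmentation or bag-of-visual-words features).

        # Cluster image pixels by colour with k-means (scikit-learn assumed).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import load_sample_image

        image = load_sample_image("china.jpg")        # H x W x 3 RGB array
        pixels = image.reshape(-1, 3).astype(float)

        kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)

        # Replace each pixel with its cluster centre: an 8-colour version of
        # the image, i.e. pixels grouped into 8 "pattern" clusters.
        quantised = (kmeans.cluster_centers_[kmeans.labels_]
                     .reshape(image.shape).astype(np.uint8))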

  • Where on disk is space allocated for new files inside an LVM LV with an ext4 file system?

    - by Jost
    I run a multi-disk server with LVM2. Several large disks serve as LVM2 physical volumes for one volume group, containing one logical volume formatted with ext4. Nothing fancy, just your standard linear setup.

    Recently an additional, very small disk was added as a physical volume to that volume group, and I expanded both the logical volume and the ext4 file system therein onto that disk. This LV is used to store incremental backups using rsync and is only about 30% full; there have rarely been any files deleted from it, only incremental writes.

    Now this new HDD I added to the pre-existing volume group has unexpectedly died on me, and the volume group won't come up because it is missing one physical volume. As fate would have it, this WAS the "in the event of catastrophic failure on the primary server" backup, the event happened, the boss is not happy, so this kinda has to work...

    According to this (Part 3): http://www.novell.com/coolsolutions/appnote/19386.html it is possible to trick LVM into starting anyway by creating a new PV with identical metadata to the failed disk, which will make the volume accessible, but of course leave giant holes in the file system. I haven't tried it yet, because it involves repairing (writing to) the file system, which eliminates the possibility of trying other things if it fails.

    Now my questions are:

      * How does this setup actually allocate disk space for new data? Is it allocated linearly from beginning to end of the PVs, in the order they were added to the VG? Is it striped somehow in order to increase performance / balance load?
      * Since this defective disk was added only later to an existing LVM2 VG and LV containing a half-empty ext4, what are the chances that there was never any data written to the defective disk? In other words: what are the chances of recovering all my data, even without the defective disk, by just starting the volume group as-is? Am I about to spend $1500 on having 250GB of empty space recovered when I send the defective disk in for repair?
      * Is there a way to check without mounting the file system and opening the files, hoping they contain something other than zeros? (Comparing addresses of used data blocks inside ext4 to the address ranges that were on the missing PV, something like that, preferably easy to automate.)

    I know bitwise-copying the entire LV into an image file before trying to repair the ext4 would probably be a good idea, but since this LV is very large and I just suffered major file system failure on several systems, it is probably a luxury I don't have... Any suggestions?
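
    For illustration, on the first and third questions: a linear LV is just a concatenation of extent ranges, so which logical blocks fall on the late-added PV is visible in the segment map, and whether ext4 ever used those blocks can be estimated from its per-group free-block lists. A hedged inspection sketch (device/VG/LV names are placeholders, and it assumes the VG can be activated, possibly in partial mode, first):

        # Read-only inspection of LV segment mapping and ext4 block usage.
        import subprocess

        def show(*cmd):
            print(subprocess.run(cmd, capture_output=True, text=True,
                                 check=True).stdout)

        # 1) Which logical-extent ranges of the LV sit on which physical volume.
        show("lvdisplay", "--maps", "/dev/vg0/backup")

        # 2) ext4 usage: dumpe2fs lists the free blocks of every block group
        #    (long output); block groups in the failed PV's extent range that
        #    are fully free were presumably never written to.
        show("dumpe2fs", "/dev/vg0/backup")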

  • HTML: generate data and print from another page

    - by Hulk
    In the code below, a.html contains:

        <div id="tableview"></div> <!-- Data loaded dynamically -->
        <input type="button" id="printbtn" onclick="print()"/>
        <script>
        function print()
        {
            var data = $('#tableview').html();
            dataobj.print();
        }
        </script>

    From b.html I need to print a.html without opening it. But without opening it, how will the data in the div get generated, and how can I print only this data from b.html? Thanks..

  • How to restore a dd-overwritten disk partition?

    - by DairyKnight
    First of all, I admit I'm stupid and I didn't run proper backups of my data, but you know, crap happens... So, I've used dd to overwrite the first 2GB of my 750GB NTFS partition with a FAT32 partition. I've run PhotoRec and EasyRecovery, but all I can restore is the 2GB FAT32 partition and the files on it. Is there a way to "roll back" to the NTFS partition and recover at least some part of the 750GB of data? Thanks.

  • Mounting GlusterFS share under www-data user

    - by Roman Newaza
    Problem: after the directory is auto-mounted, the web server has no write permissions on it.

    Question: how do I auto-mount a GlusterFS endpoint via /etc/fstab so that the mount point belongs to www-data after it's mounted? Right now the mount point belongs to www-data before mounting, but after mounting it turns to root.

        # /etc/fstab
        foo.com:/st /st glusterfs defaults 0 0

    It seems like I cannot define a user / group as mount options for GlusterFS; at least I don't see them in man glusterfs. Thanks!

  • SQL replication - collecting data

    - by Cicik
    Hi, I have a master SQL Server with DB Central and a lot of satellite SQL Servers with DB Client. I need to collect data from the log tables (LogTable) on each Client (each client has its own ID in the log table) into one big table on Central (LogTableCentral). Requirements:

      * Data must go only from Client to Central.
      * On each Client I want to have only that Client's data.
      * I need a solution with minimal work on the client side, because of the number of clients.

    Central is SQL Server Enterprise; Clients are SQL Server 2005 or 2008. Thanks a lot. EDIT: data can be collected periodically (for example, every day at 01:00).
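
    For illustration (not the only option): since the collection can run periodically, a central pull job with a per-client high-water mark avoids configuring replication on every client. A rough sketch in which every DSN, table, and column name is a placeholder:

        # Central job pulling new log rows from each client into LogTableCentral.
        # All connection strings and column names are illustrative placeholders.
        import pyodbc

        CLIENTS = {1: "DSN=client1", 2: "DSN=client2"}   # client id -> DSN

        central = pyodbc.connect("DSN=central")
        cc = central.cursor()

        for client_id, dsn in CLIENTS.items():
            # High-water mark: the largest log id already copied for this client.
            cc.execute("SELECT COALESCE(MAX(LogId), 0) FROM LogTableCentral "
                       "WHERE ClientId = ?", client_id)
            last_id = cc.fetchone()[0]

            client = pyodbc.connect(dsn)
            rows = client.cursor().execute(
                "SELECT LogId, LogTime, Message FROM LogTable WHERE LogId > ?",
                last_id).fetchall()

            cc.executemany(
                "INSERT INTO LogTableCentral (ClientId, LogId, LogTime, Message) "
                "VALUES (?, ?, ?, ?)",
                [(client_id, r.LogId, r.LogTime, r.Message) for r in rows])
            central.commit()
            client.close()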

  • Problem with reloading data in a table view after coming back from another view

    - by user129677
    I have a problem in my application; any help will be greatly appreciated. Basically, it goes from view A to view B, and then comes back from view B.

    View A has dynamic data loaded in from the database, displayed in a table view. This page also has an edit button, not on the navigation bar. When the user taps the edit button, it goes to view B, which shows a picker view, and the user can make changes there. Once that is done, the user taps the back button on the navigation bar; it saves the changes into the NSUserDefaults and goes back to view A by popping view B.

    When coming back to view A, it should get the new data from the NSUserDefaults, and it did: I used NSLog to print to the console and it shows the correct data. It should also invoke the viewWillAppear: method to get the new data for the table view, but it didn't. It did not even call the tableView:numberOfRowsInSection: method; I placed an NSLog statement inside this method, but it didn't print to the console. As a result, view A still has the old data. The only way to get the new data into view A is to stop and restart the application.

    Both view A and view B are subclasses of UIViewController, with UITableViewDelegate and UITableViewDataSource. Here is my code in view A:

        - (void)viewWillAppear:(BOOL)animated
        {
            NSLog(@"enter in Schedule2ViewController ...");
            // load in data from database, and store into NSArray object
            //[self.theTableView reloadData];
            [self.theTableView setNeedsDisplay];
            //[self.theTableView setNeedsLayout];
        }

    Here, theTableView is a UITableView variable, and I tried all three of reloadData, setNeedsDisplay, and setNeedsLayout, but they didn't seem to work.

    In view B, here are the methods corresponding to the back button on the navigation bar:

        - (void)viewDidLoad
        {
            UIBarButtonItem *saveButton = [[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemSave
                                     target:self
                                     action:@selector(savePreference)];
            self.navigationItem.leftBarButtonItem = saveButton;
            [saveButton release];
        }

        - (IBAction)savePreference
        {
            NSLog(@"save preference.");
            // save data into the NSUserDefaults
            [self.navigationController popViewControllerAnimated:YES];
        }

    Am I doing this the right way? Or is there anything that I missed? Many thanks.

  • Getting Data From Webpages?

    - by fuzzygoat
    When looking to get data from a web page, what's the recommended method if the page does not provide a structured data feed? Am I right in thinking that it's just a case of doing an NSURLRequest and then hacking what you need out of the responseData (NSData*)? I'm not too concerned about the implementation in Xcode; I'm more curious about actually collecting the data before I start coding a "hunt & peck" through a list of data. gary
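
    For illustration, in any language the usual answer is: fetch the page, then use a real HTML parser rather than string hunting. A minimal language-agnostic sketch in Python (the URL and CSS selector are placeholders; assumes the requests and beautifulsoup4 packages):

        # Fetch a page and pull data out with an HTML parser instead of
        # string "hunt & peck". URL and selector are placeholders.
        import requests
        from bs4 import BeautifulSoup

        html = requests.get("https://example.com/prices").text
        soup = BeautifulSoup(html, "html.parser")

        for row in soup.select("table#prices tr"):    # CSS selector into markup
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                print(cells)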

  • Save data typed into PDF Form

    - by Manzoor Ahmed
    Hey, I have a PDF form which will not let me save the data typed into it. Here is the form: http://www.cic.gc.ca/english/pdf/kits/forms/imm0008egen.pdf I want it to save the data typed into it so that I can email it to my relative. Any ideas? I'm using Acrobat Reader.

  • PHP scripts owned by www-data

    - by matnagel
    I am always running PHP scripts on a dedicated server as the user "webroot". It would be easier for coding and administration if the scripts were owned by www-data, the Apache2 user. It also feels simpler and cleaner. There is no FTP on this box, and there are no other users or sites. Why not have the PHP scripts owned by www-data? If there is anything against it, what is the worst that can happen?

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: storing typed objects in memory.

    Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: every primitive is a semi-class (it can be used as if it were a normal class, but it's stored more compactly). Every class is represented by primitives and some metadata (containing class properties, inheritance stuff, etc.). The metadata is complex: it doesn't use fields but instead context switches. For primitives, the metadata is very small, compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary.

    Example: if I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The metadata will contain flags that the type is an array of booleans, which contains 32 entries. The metadata is very compacted, at bit level: it uses a sort of "packing" mechanism, which is read by an FSM at runtime when doing inspection of the type (like when passing the object to methods for checking, etc.).

    For instance, read as a decision tree ([...] is variable-sized, || marks byte alignment, "done" ends the type metadata, and the class branch is not described here, for simplicity):

        Primitive?
          1 -> Array?
                 1 -> element Type-Meta [...], then 1 Byte?
                        1 -> || Size (1 byte)  [...] done
                        0 -> 2 Bytes?
                               1 -> || Size (2 bytes) [...] done
                               0 -> || Size (4 bytes) [...] done
                 0 -> Integer?
                        1 -> 1 Byte?
                               1 -> done
                               0 -> 2 Bytes?
                                      1 -> done
                                      0 -> done
                        0 -> Boolean?
                               1 -> Byte?
                                      0 -> done
                                      1 -> done
                               0 -> More-Primitives ....
          0 -> Class-Stuff (Huge) ...

    (After reaching done, the data is inserted. Let's call these cost-based data structures.)

    For an array of 32 booleans containing all true values, the memory for this type would be (read top-down):

        1         Primitive
        1         Array
        1         ArrayType: Primitive
        0         Not-Array
        0         Not-Integer
        1         Boolean
        0         Not-Byte (thus bit)
        1         Size: 1 byte
        00100000  Array size (32)
        11111111 11111111 11111111 11111111  Data

    Thus, six bytes represent an array of 32 booleans:

        11100101 00100000 11111111 11111111 11111111 11111111

    Is this okay, or can it be done better?
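
    For illustration, a small sketch re-deriving the example bytes above; the header bit values are copied from the walkthrough, and everything else (MSB-first packing, big-endian data) is an illustrative assumption:

        # Re-derive the example encoding: 1 header byte + 1 size byte + 4 data
        # bytes for an array of 32 true booleans. Purely illustrative.
        bits = [1, 1, 1, 0, 0, 1, 0, 1]      # header bits from the example
        header = 0
        for b in bits:                        # MSB-first: first flag = top bit
            header = (header << 1) | b

        size = 32                             # array length, fits in 1 byte
        data = (1 << 32) - 1                  # 32 true booleans

        encoded = bytes([header, size]) + data.to_bytes(4, "big")
        print(" ".join(f"{byte:08b}" for byte in encoded))
        # -> 11100101 00100000 11111111 11111111 11111111 11111111 (6 bytes)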

  • Migrate data from SQL Compact to SQL Server 2008

    - by Martin
    I need to do a one-time migration of data from SQL Server Compact Edition to SQL Server 2008 Express Edition, and I'm looking for a tool to do this kind of migration. I've tried using Import and Export Data in SQL Server, but it doesn't let me import from SQL Server Compact Edition. Does anyone know of an easy way to do it?

  • Data aggregation: MongoDB vs. MySQL

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data-aggregation requirements. The main project requirements are the following:

      * Store millions of records for each user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.
      * Data aggregation on those entries must be performed on the fly. The users need to be able to filter the entries by a ton of available filters and then be shown summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge.
      * Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated over all the data.
      * The data is going to arrive in batches most of the time, e.g. the user will upload the data every day, and it could be around 3,000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example.

    I made a simple test of creating a table with 1 million rows and performing a simple sum of 1 column, both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I have also made the test with CouchDB and had much worse results.

    What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible?

    As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might be of help for the implementation of the project?

    Thanks, Dimitris
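
    For what it's worth, MongoDB's aggregation pipeline (added in MongoDB 2.2, after the era of the tests described above) performs filtered sums and averages server-side. A hedged sketch with pymongo, where database, collection, and field names are all placeholders:

        # Server-side sum/average with MongoDB's aggregation pipeline.
        # Database/collection/field names are illustrative placeholders.
        from pymongo import MongoClient

        entries = MongoClient()["mydb"]["entries"]

        pipeline = [
            {"$match": {"user_id": 42, "category": "expense"}},  # user's filters
            {"$group": {
                "_id": None,
                "total":   {"$sum": "$amount"},
                "average": {"$avg": "$amount"},
                "count":   {"$sum": 1},
            }},
        ]
        for doc in entries.aggregate(pipeline):
            print(doc)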

  • Satellite data in Russia?

    - by Eddy
    Anyone familiar with options for transmitting data in Russia? I'd be interested in hearing about low-speed packet data and faster. Not really looking at VSAT initially as I'd like to keep the power requirements low unless we find no other options.

  • Data Execution Prevention in Windows Live Messenger

    - by Andrija
    I keep getting a "Data Execution Prevention" error in Windows Live Messenger. I have noticed that this usually happens when I leave the computer to get coffee: the screensaver comes up and WLM breaks. Is there any way to prevent this error from happening? I see I can turn off "Data Execution Prevention", but is that safe, given that I know WLM is under heavy attack from spammers/hackers? Thanks

  • Add data to the DataRepeater control in WinForms

    - by Mohsan
    Hi. Visual Studio 2008 Service Pack 1 comes with the Visual Basic Power Packs, which include the DataRepeater control. I want to know how I can add data to this control. I have in-memory data, but the examples I found on the net are about binding a DataSet to the DataRepeater by fetching data from a database. I want to bind in-memory data. How do I do this?

  • Merits of .NET ORM data access methods: Entity Framework vs. NHibernate vs. Subsonic vs. ADO.NET Datasets

    - by Lloyd
    I have recently heard "fanboys" of different .NET ORM methodologies express strong, if not outlandish, opinions of other ORM methodologies, and frankly I feel a bit in the dark. Could you please explain the key merits of each of these .NET ORM solutions?

      1. Entity Framework
      2. NHibernate
      3. Subsonic
      4. ADO.NET Datasets

    I have a good understanding of 1 & 4 and a cursory understanding of 2 & 3, but apparently not enough to understand the implied cultural perceptions of one towards the other.
