Search Results

Search found 67192 results on 2688 pages for 'excel external data'.


  • Adding a line of text between a pattern and the next occurrence of the same pattern in bash

    - by kasper
    I am writing a bash script that modifies a file that looks like this:

        --- usr1 ---
        data data data data
        data data data data
        data data data data
        --- usr2 ---
        data data data data
        data data data data
        --- usr3 ---
        data data data data
        --- endline ---

    One question is: how do I add the next user line (--- usrn ---) after the last user's data lines? The second is: how do I delete a specific user's lines (the --- usrx --- line and its data lines)? For example, I would like to delete usr2 together with his whole data set. It must work on bash 2.05 :) and I think it will use awk or sed, but I'm not sure.
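
    A minimal sketch of the block logic in Python (the asker wants bash with awk or sed, but the parsing idea carries over; the function names are mine and the marker format is taken from the sample above):

        import re

        MARKER = re.compile(r'^--- \S+ ---$')

        def delete_user(lines, user):
            # lines: the file as a list of lines with newlines stripped.
            # Drop the "--- user ---" marker and every line up to the next marker.
            out, skipping = [], False
            for line in lines:
                if MARKER.match(line):
                    skipping = (line == '--- %s ---' % user)
                if not skipping:
                    out.append(line)
            return out

        def append_user(lines, user, data_lines):
            # Insert the new user's block just before the "--- endline ---" marker.
            idx = lines.index('--- endline ---')
            return lines[:idx] + ['--- %s ---' % user] + data_lines + lines[idx:]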

    Read the article

  • How to Load Data from Text File in MySQL?

    - by Taz
    I have this file available here. Fields are separated by spaces and each record starts on a new line. I am using this command to load the data:

        mysql> LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\images_en.nt\\Sample4.txt'
               INTO TABLE NER.Images
               FIELDS TERMINATED BY ' '
               LINES TERMINATED BY '\n';

    Only one row is loaded, and if I execute it again, no row is loaded. Where is the problem? The data can be reformatted if required.
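
    One thing worth checking (an assumption, since the file itself isn't shown): Windows text files end lines with \r\n, so terminating lines with '\n' leaves a stray carriage return in the last field, which can make rows collide on a key and be silently skipped. A sketch of the same load from Python with Windows line endings, assuming the mysql-connector-python package (connection details hypothetical):

        import mysql.connector  # pip install mysql-connector-python

        # LOCAL INFILE must be enabled on both the client and the server.
        conn = mysql.connector.connect(user='user', password='secret',
                                       allow_local_infile=True)
        cur = conn.cursor()
        cur.execute(r"""
            LOAD DATA LOCAL INFILE 'C:/path/to/Sample4.txt'
            INTO TABLE NER.Images
            FIELDS TERMINATED BY ' '
            LINES TERMINATED BY '\r\n'
        """)
        conn.commit()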

    Read the article

  • Designing a DAL in .NET to be "data-source independent" and not just "database independent"?

    - by Munish Goyal
    How do I design such a flexible DAL (specifically in .NET)? What interfaces does .NET provide, and what should I build on my own? It's a greenfield project starting with SQL Server as the data source, but in the future parts of it will move to different NoSQL-type datastores. Also, we may need to experiment with a lot of different datastores (some data may have to go to Cassandra, some to an RDBMS, some to other DHTs, etc.), so an easily switchable access layer will be needed. All I know right now is the 'data' and the 'operations needed on that data'.
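
    The usual shape of the answer is a repository abstraction: code against an interface per aggregate and bind a concrete store behind it. A language-agnostic sketch of the idea in Python (type and method names are hypothetical):

        from abc import ABC, abstractmethod

        class UserRepository(ABC):
            """The operations the application needs, independent of any store."""
            @abstractmethod
            def get(self, user_id): ...
            @abstractmethod
            def save(self, user): ...

        class SqlUserRepository(UserRepository):
            def get(self, user_id): ...   # SELECT against SQL Server
            def save(self, user): ...     # INSERT/UPDATE

        class CassandraUserRepository(UserRepository):
            def get(self, user_id): ...   # lookup by partition key
            def save(self, user): ...

        # Callers depend only on UserRepository; which store backs it
        # becomes a configuration choice rather than a code change.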

    Read the article

  • Resources to learn about engineering aspects of data analytics (OLAP, warehousing, ETL, etc.)

    - by JT
    I'm a math/stats guy, interested in learning more about the engineering aspects of "data analytics" (this may be an overly broad term; it's a case of "I don't know what I don't know", so I'm not sure how to be more specific). I'm fine with manipulating and analyzing data once it's already stored somewhere and I can access it, and I'm fine with writing scripts and SQL queries (and have a general knowledge of things like normalization). What I don't know is the whole engineering process of capturing and storing the data. For example, terms I've heard thrown around that I only vaguely understand include:
    - OLAP, OLTP
    - Data warehousing
    - ETL
    - ???
    What's a good book (or any other resource) to learn about these kinds of things? What should I know about database design (normalization seems kind of "obvious" to me, something I would have done even before I knew the term -- is there anything else)? In other words, for jobs falling under the umbrella term of "analytics engineer", what kinds of things should I know?
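
    As a concrete anchor for one of those terms, a toy ETL pass in Python (file name and schema are made up): extract raw records, transform them, and load them into a reporting table.

        import csv, sqlite3

        conn = sqlite3.connect('warehouse.db')
        conn.execute('CREATE TABLE IF NOT EXISTS sales (day TEXT, region TEXT, amount REAL)')

        with open('raw_sales.csv', newline='') as f:      # extract
            for row in csv.DictReader(f):
                day = row['timestamp'][:10]               # transform: truncate to date
                amount = float(row['amount'])             # transform: clean/convert
                conn.execute('INSERT INTO sales VALUES (?, ?, ?)',
                             (day, row['region'].upper(), amount))

        conn.commit()   # load is complete; reporting queries now hit `sales`

    Real ETL systems do the same three steps, just with scheduling, validation, and much bigger pipes.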

    Read the article

  • Is there a lightweight multipart/form-data parser in C or C++?

    - by Hongli
    I'm looking at integrating multipart/form-data parsing into a web server module so that I can relieve backend web applications (often written in dynamic languages) from parsing the multipart data themselves. The multipart grammar (RFC 2046) looks non-trivial, and if I implement it by hand a lot of things can go wrong. Is there already a good, lightweight multipart/form-data parser written in C or C++? I'm looking for one with no external dependencies other than the C or C++ standard library. I don't need email attachment handling, buffered I/O classes, or a portability runtime -- just multipart/form-data parsing. Things that I've considered:
    - GMime: depends on glib, so no go.
    - libapreq: too large, depends on APR, badly documented, no unit tests.
    I've also looked at writing a parser with Ragel, but I can't figure out how to do it because the grammar is not static: the boundary can change arbitrarily.
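
    For a sense of the structure the RFC describes, a deliberately naive sketch in Python (not a compliant parser: no header unfolding, no streaming, and it assumes the whole body fits in memory):

        def split_multipart(body: bytes, boundary: bytes):
            # Parts are delimited by b'--' + boundary; the final delimiter ends
            # with an extra b'--'. Each part is headers, a blank line, then data.
            delim = b'--' + boundary
            parts = []
            for chunk in body.split(delim)[1:]:
                if chunk.startswith(b'--'):        # closing delimiter reached
                    break
                headers, _, data = chunk.lstrip(b'\r\n').partition(b'\r\n\r\n')
                parts.append((headers, data.rstrip(b'\r\n')))
            return parts

    The boundary comes from the Content-Type header at runtime, which is exactly why a fixed grammar (as in Ragel) is awkward here.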

    Read the article

  • Generate and merge data with Python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same, and I want to remove them. The sequential version is essentially:

        def create_new_data_for(datum):
            """make a list of new data from some old datum"""
            return [datum.modified_copy(k) for k in datum.k_list]

        data = [some list of data]  # some data to start with

        # generate a list of new data from the old data, we'll reduce it next
        newdata = []
        for d in data:
            newdata.extend(create_new_data_for(d))

        # now reduce the data under ".matches(other)"
        reduced = []
        for d in newdata:
            for seen in reduced:
                if d.matches(seen):
                    break
            else:
                # so we haven't seen anything like d yet
                reduced.append(d)

        # now reduced is finished and is what we want!

    I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation: each process would just put the stuff it creates on the queue, and when the processes are reducing the data, they could just get the data from the queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different, better way to accomplish this goal?
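
    A minimal sketch of one safe arrangement, reusing the asker's create_new_data_for (which must be importable at module top level, and the data picklable): fan the generation out to a Pool and keep the order-dependent reduction in the parent process, so no process ever mutates shared state.

        from multiprocessing import Pool

        def generate_and_reduce(data, processes=4):
            with Pool(processes) as pool:
                # Workers only generate; nothing is shared or mutated concurrently.
                chunks = pool.map(create_new_data_for, data)
            reduced = []
            for d in (item for chunk in chunks for item in chunk):
                if not any(d.matches(seen) for seen in reduced):
                    reduced.append(d)
            return reduced

    Parallelizing the reduction itself is only worth the locking headaches if .matches() dominates the runtime.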

    Read the article

  • How to read a file with variable multi-row data in Python

    - by dr.bunsen
    I have a file of about 100 MB that looks like this:

        #meta data 1
        skadjflaskdjfasljdfalskdjfl
        sdkfjhasdlkgjhsdlkjghlaskdj
        asdhfk
        #meta data 2
        jflaksdjflaksjdflkjasdlfjas
        ldaksjflkdsajlkdfj
        #meta data 3
        alsdkjflasdjkfglalaskdjf

    The file contains one row of metadata that corresponds to several variable-length rows of data containing only alphanumeric characters. What is the best way to read this data into a simple list like this:

        data = [['#meta data 1', 'skadjflaskdjfasljdfalskdjflsdkfjhasdlkgjhsdlkjghlaskdjasdhfk'],
                ['#meta data 2', 'jflaksdjflaksjdflkjasdlfjasldaksjflkdsajlkdfj'],
                ['#meta data 3', 'alsdkjflasdjkfglalaskdjf']]

    My initial idea was to use the read() method to read the whole file into memory and then use regular expressions to parse the data into the desired format. Is there a better, more Pythonic way? All metadata lines start with an octothorpe and all data lines are alphanumeric. Thanks!
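
    A line-at-a-time sketch that avoids reading the whole 100 MB into memory, keying off the two facts stated above (metadata lines start with '#', everything else is data):

        def read_blocks(path):
            data = []
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line.startswith('#'):      # a metadata line opens a new block
                        data.append([line, ''])
                    elif line and data:           # data lines accumulate onto the current block
                        data[-1][1] += line
            return data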

    Read the article

  • Visual Studio 2008 Macro for Building does not block thread

    - by Climber104
    Hello, I am trying to write a macro for building my application, kicking off an external tool, and attaching the debugger to that external tool. Everything works except the building: it builds, but it does not block the thread, so the external tool gets kicked off before the build can finish. Is there a way I can run ExecuteCommand and wait for it to finish? Code is below:

        DTE.ExecuteCommand("ClassViewContextMenus.ClassViewProject.Build")
        DTE.ExecuteCommand("Tools.ExternalCommand11")
        Try
            Dim dbg2 As EnvDTE80.Debugger2 = DTE.Debugger
            Dim trans As EnvDTE80.Transport = dbg2.Transports.Item("Default")
            Dim dbgeng(1) As EnvDTE80.Engine
            dbgeng(0) = trans.Engines.Item("Managed")
            Dim proc2 As EnvDTE80.Process2 = dbg2.GetProcesses(trans, "MINIPC").Item("_nStep.exe")
            proc2.Attach2(dbgeng)
        Catch ex As System.Exception
            MsgBox(ex.Message)
        End Try

    Read the article

  • How to process large blocks of data for visualization with Flex?

    - by hydra1983
    I know that's a big topic. However, it's better to know some general ideas for handling such problems. I have an application which requires Flex to render statistics calculated instantly on the client side from a downloaded data set. The problems are:
    - The data set is large and needs more than 10 seconds to be downloaded.
    - There are some filters that control the statistics calculation algorithms. If the user changes the filters, it takes a long time to recalculate the result, which freezes the UI.

    Read the article

  • Problem transferring a Win 7 operating system hard drive to be used as an external hard drive

    - by itserich
    Win 7, Asus MA479, 8GB RAM, 500GB hard drives. The operating system was on a Caviar Green, and I tried to exchange it for a Caviar Blue. The Caviar Blue took the install correctly, but the Green, the prior operating system drive, will not allow itself to be used as an external hard drive. I use TrueCrypt and tried to format the Green, and it freezes every time at the very end of encrypting the partition. I took the Blue out of the system and tried to encrypt it: same problem. I think there must be something on the hard drive that shows it was a system drive and causes a conflict. I have tried writing over the system hard drive to fully erase everything, but that does not work; it still freezes. The drives will work in a different PC, i.e. a PC where the drive was never the system drive. The external hard drive is connected via eSATA through the Thermaltake. I have used TrueCrypt on various PCs for years, including this Win 7 machine, with no problem. Thank you.

    Read the article

  • HTML5 uploader + jQuery drag & drop: how to store file data with FormData?

    - by lauthiamkok
    I am making an HTML5 drag-and-drop uploader with jQuery; below is my code so far. The problem is that I get an empty array without any data. Is this line incorrect for storing the file data: fd.append('file', $thisfile);?

        $('#div').on('dragover', function(e) {
            e.preventDefault();
            e.stopPropagation();
        });

        $('#div').on('dragenter', function(e) {
            e.preventDefault();
            e.stopPropagation();
        });

        $('#div').on('drop', function(e) {
            if (e.originalEvent.dataTransfer) {
                if (e.originalEvent.dataTransfer.files.length) {
                    e.preventDefault();
                    e.stopPropagation();

                    // The file list.
                    var fileList = e.originalEvent.dataTransfer.files;
                    //console.log(fileList);

                    // Loop the ajax post.
                    for (var i = 0; i < fileList.length; i++) {
                        var $thisfile = fileList[i];
                        console.log($thisfile);

                        // HTML5 form data object.
                        var fd = new FormData();
                        //console.log(fd);
                        fd.append('file', $thisfile);
                        /*
                        var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
                        $.each(file, function(key, value) {
                            fd.append('file[' + key + ']', value);
                        })
                        */
                        $.ajax({
                            url: "upload.php",
                            type: "POST",
                            data: fd,
                            processData: false,
                            contentType: false,
                            success: function(response) {
                                // .. do something
                            },
                            error: function(jqXHR, textStatus, errorMessage) {
                                console.log(errorMessage); // Optional
                            }
                        });
                    }

                    /* UPLOAD FILES HERE */
                    upload(e.originalEvent.dataTransfer.files);
                }
            }
        });

        function upload(files) {
            console.log('Upload ' + files.length + ' File(s).');
        }

    Then, if I use another method, which is to put the file data into an array inside the jQuery code,

        var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
        $.each(file, function(key, value) {
            fd.append('file[' + key + ']', value);
        });

    where is the tmp_name data inside e.originalEvent.dataTransfer.files[i]? PHP:

        print_r($_POST);
        $uploaddir = './uploads/';
        $file = $uploaddir . basename($_POST['file']['name']);
        if (move_uploaded_file($_POST['file']['tmp_name'], $file)) {
            echo "success";
        } else {
            echo "error";
        }

    As you can see, tmp_name is needed to upload the file via PHP... HTML:

        <div id="div">Drop here</div>

    Read the article

  • Is there any reason why someone would want to create a Core Data model programmatically?

    - by mystify
    I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all that stuff. I'm the kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode data modeler tool. And since data models are stuck to a given state (except when you want to do some ugly migration operations, where things probably go wrong and users get mad, really mad), I see no big sense in a data model that's made programmatically for the purpose of changing it all the time. Did I miss something?

    Read the article

  • MySQL Cluster data nodes - slow SELECTs

    - by Boyan Georgiev
    Hi to all. First off, I'm new to MySQL Cluster. This is my pain: I've managed to set up a MySQL Cluster with two data nodes, two SQL nodes and one management server. Everything works pretty well, except the following: my data nodes are spread across an intranet link which adds latency to communication between them. Apparently, due to MySQL Cluster's internal partitioning scheme, when my PHP application pulls data from the cluster via SELECT queries, parts of the data are pulled from both data nodes. This makes the page appear onscreen REALLY slowly. If I bring one data node offline, the data can only be pulled from that single remaining data node, and thus the final result (HTML output) appears on the screen in a very timely fashion. So, my question is this: can the data nodes/cluster be told to pull data from partitions stored only on a particular data node?

    Read the article

  • Separate external and intranet portals using the same functions (.htaccess)

    - by jezzipin
    We are currently struggling to set up rules in a .htaccess file for a website built on our company product. The product is built using PL/SQL, and procedures can be accessed using URLs. We use this functionality to present different options to our users. These options can be injected into HTML pages using replacement tags, so the tag [user_menu] is always replaced with:

        /wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for external sites and

        /intranet/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    for internal sites. The issue we are having is twofold. We need to write our .htaccess rules so that users can access the functionality whether they are internal or external. So, the links should work as follows:

        http://www.example.com/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    or

        http://www.example.com/internal/wd_portal_cand.menu?p_web_site_id={variable1}&p_candidate_id={variable2}

    That is the other problem: as you can see from the internal link above, the procedure needs to be prefixed with internal instead of intranet. We cannot change this in our standard tags, as that would affect other sites, so we need to achieve this with .htaccess as well. Could anyone assist with this issue? I apologise if this is brief or confusing, but it's something I've never done before and have been given the task of doing. I also apologise for the lack of code above; I am a front-end developer with no prior experience of .htaccess who has been left to make these changes, so please bear with me.
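
    A hedged sketch of the kind of mod_rewrite rule this usually comes down to (untested, and the exact prefixes are assumptions based on the description above): map the public /internal/ prefix onto the /intranet/ path the product actually serves.

        RewriteEngine On
        # Rewrite /internal/wd_portal_...?...  to  /intranet/wd_portal_...?...
        # [PT] hands the rewritten URL back for normal handling; the query
        # string is carried over automatically.
        RewriteRule ^internal/(wd_portal_.*)$ /intranet/$1 [PT]

    External links without a prefix would pass through untouched, so both URL forms can serve the same procedures.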

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or for demos. Currently we have:

        Namespace App.Data (App.Data.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    The above structure is OK, as we only use SQL Server as the DB. But going forward we will be including Oracle, MySql, etc. So what would be a better structure with this in mind? I thought about:

        Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    Or would it just be better to have separate assemblies for each database and access technology?

        Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
        Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
        Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
        Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)

    Read the article

  • Reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • Are there any frameworks for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view.
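
    The mechanism being asked for is the observer/publish-subscribe pattern; a toy in-process sketch in Python (purely illustrative; real frameworks put a network transport and per-client subscription bookkeeping behind the same shape):

        class Topic:
            """One observable subset of the server's data."""
            def __init__(self):
                self.subscribers = []

            def subscribe(self, callback):
                self.subscribers.append(callback)

            def publish(self, change):
                # Push the change to every client currently viewing this data.
                for notify in self.subscribers:
                    notify(change)

        users = Topic()
        users.subscribe(lambda change: print('client A sees:', change))
        users.subscribe(lambda change: print('client B sees:', change))
        users.publish({'op': 'add', 'user': 'alice'})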

    Read the article

  • Best way to migrate servers without losing any data and with no downtime(?)

    - by ina
    This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've lost MySQL data between the time the new server goes up (i.e., all files transferred, system up and ready) and the time I take the old server down (data still arriving at the old server until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes. Is there a way for MySQL/root to easily transfer all data that was updated or inserted within a certain time frame?

    Read the article

  • Connecting to network device behind NAT from local LAN using the external port and IP

    - by lumbric
    I have noticed the following phenomenon on several different LANs connected to the Internet through a NAT. There is a server in the LAN, and there is a port forwarding to reach this server from outside the LAN through the NAT. E.g., consider a LAN with the address range 192.168.0.*, an SSH server at 192.168.0.2 port 22, and a forwarding from port 2222 at the NAT 192.168.0.1 to 192.168.0.2:22. If the NAT's external IP is 44.33.22.11, one can connect to the SSH server through 44.33.22.11:2222. Surprisingly, this works only from outside the LAN. If one tries to connect to 44.33.22.11:2222 from behind the NAT, there is no answer. Of course one could simply use 192.168.0.2:22, but often it is simpler to use the external IP. The typical use case for me is configuration on a laptop: usually the user uses an arbitrary Internet connection to connect to his home or office server, but sometimes he will also use the LAN to connect to it, and it would be annoying to have two different configurations or bookmarks. Why does it fail to connect from inside the LAN? Is there any good workaround?

    Read the article

  • Is there a difference between transient properties defined in the data model and in the custom subclass?

    - by mystify
    I was reading that setting the value of a transient property always results in marking the managed object as "dirty". However, what I don't get is this: if I make a subclass of NSManagedObject and use some extra properties which I don't need to be persisted, how does Core Data know about them, and how can it mark the object as dirty when I access them? Again, they're not defined in the data model, so Core Data has no really good hint that they are there. Or does Core Data use some kind of introspection to analyze my custom class and figure out what properties I have in there?

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted. I'd like to ask a question about it on Stack Overflow; to do that, I'd like to write the data.frame out in a form that I can paste into SO so that somebody else can easily run it and turn it back into a data.frame object. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it directly here?

    Read the article

  • Mass data storage with SQL Server

    - by Leo
    We need to manage 10,000 GPS devices. Each GPS device uploads a record every 30 seconds, and these records need to be stored in the database (MS SQL Server 2005).

    Each GPS device's daily record count is: 24 * 60 * 2 = 2,880
    The daily record count for 10,000 GPS devices is: 10,000 * 2,880 = 28,800,000
    Each GPS record is approximately 160 bytes, so the amount of data per day is: 28,800,000 * 160 = 4.29 GB

    We need to hold at least 3 months of GPS data in the database. My questions are:
    1. Can SQL Server 2005 support storing such a large amount of data?
    2. How should I plan the data tables? (All GPS data in one table? A table per day? A table per GPS device?)

    The GPS record:

        GPSID varchar(21), RecvTime datetime, GPSTime datetime, IsValid bit,
        IsNavi bit, Lng float, Lat float, Alt float, Spd smallint,
        Head smallint, PulseValue bigint, Oil float, TSW1 bigint,
        TSW1Mask bigint, TSW2 bigint, TSW2Mask bigint, BSW bigint,
        StateText varchar(200), PosText varchar(200), UploadType tinyint

    Read the article

  • C# or Windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the boat and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? i.e. having managed data types where you get saving, data management, deleting, searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • What is the most efficient way to use Core Data?

    - by Eric
    I'm developing an iPad application using Core Data, and was hoping someone could clarify something about Core Data. Right now, I populate my table by making a fetch request for all of my data in viewDidLoad. I'd rather make individual fetch requests in my tableView:cellForRowAtIndexPath:. Can anyone tell me which is more efficient, and why? In other words, is it much less efficient to make lots of small requests as opposed to one big request?

    Read the article
