Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on.

    Currently, I do statistical analysis on the data using a Python/NumPy/Matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side.

    Users should be allowed to create their own histograms and data analysis. For example, one user could search for all items that are blue and were shipped between week x and week y, while another could sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seemed inefficient. I'm looking forward to hearing your ideas. Thanks!
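
    One direction worth sketching: push the aggregation into Postgres so the web layer only renders results. width_bucket() computes histogram bins server-side; the table and column names below are assumptions based on the fields described above.

        -- weight histogram in 20 bins between 0 and 100, for blue items
        -- collected in a given window (schema names are hypothetical)
        SELECT width_bucket(item_weight, 0, 100, 20) AS bucket,
               count(*)                              AS n
        FROM   items
        WHERE  item_color = 'blue'
          AND  date_collected BETWEEN '2009-01-01' AND '2009-01-31'
        GROUP  BY bucket
        ORDER  BY bucket;

    With indexes on the filter columns, aggregates like this usually come back in seconds even on tens of millions of rows, which sidesteps precomputing every possible report.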

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed. Using Coda 1.6.10. I'm writing PHP files and then previewing them from file, not from a server. This was working fine until I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like

        include_once "common/header.php";

    without specifying my entire file path, like so:

        include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php";

    where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work, as the variables were not set. Is there an easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of at the drive root? Currently, echoing the include path shows ".:". When I include this line at the start of the script, it works:

        set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0');

    However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.
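
    Two common fixes, sketched here as assumptions about this setup rather than a definitive answer: anchor includes to the current file (PHP 5.2 predates __DIR__, so dirname(__FILE__) is the tool), or make sure the php.ini being edited is the one the binary actually loads - phpinfo() reports it under "Loaded Configuration File".

        <?php
        // option 1: resolve relative to the current script, not the working directory
        include_once dirname(__FILE__) . '/common/header.php';

        // option 2 is a php.ini line, not PHP code (on the boot volume,
        // /Users/neil/... is the same path without the /Volumes prefix):
        //   include_path = ".:/Users/neil/Desktop/Website/ColoredLists_v1.0"
        ?>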

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point...).

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in. They can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access. I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and after I open the file, run a report, and close it, the file is at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables pointing at the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • Visual Studio 2008 linker wants all symbols to be resolved, not only used ones

    - by user343011
    We recently upgraded from Visual Studio 2005 to 2008, and I think these errors started after that. Our solution contains a multitude of projects. Many of them are utility projects, or projects containing core functionality used by other projects. The output of those is .lib files, which are linked into the projects generating the final binaries via the "Project dependencies..." option.

    One of the other projects---let us call it ResultLib---generates a DLL, and it needs one single function from the core project. This function uses only static functions from its own source file, but the core project in its entirety uses a lot of low-level Windows functions and also imports a DLL---let us call it Driver.dll.

    Our problem is that when building ResultLib, the linker complains about a multitude of unresolved externals, for example all functions exported from Driver.dll, since its .lib file is not specified when linking. If we try to fix this by adding all the .lib files used by other projects that use all of the core project, our resulting ResultLib DLL ends up importing Driver.dll and also exporting all functions defined in it. How do we tell Visual Studio to only try to resolve symbols that are actually used?

    Read the article

  • How to write a simple lexer/parser with ANTLR 2.7?

    - by Burkhard
    Hello, I have a complex grammar (in ANTLR 2.7) which I need to extend. Having never used ANTLR before, I wanted to write a very simple lexer and parser first. I found a very good explanation for ANTLR 3 and tried to adapt it:

        header {
            #include <iostream>
            using namespace std;
        }

        options {
            language = "Cpp";
        }

        class P2 extends Parser;

        /* This will be the entry point of our parser. */
        eval
            : additionExp
            ;

        /* Addition and subtraction have the lowest precedence. */
        additionExp
            : multiplyExp ( "+" multiplyExp | "-" multiplyExp )*
            ;

        /* Multiplication and division have a higher precedence. */
        multiplyExp
            : atomExp ( "*" atomExp | "/" atomExp )*
            ;

        /* An expression atom is the smallest part of an expression: a number.
           Or, when we encounter parentheses, we make a recursive call back to
           the rule 'additionExp'. As you can see, an 'atomExp' has the highest
           precedence. */
        atomExp
            : Number
            | "(" additionExp ")"
            ;

        /* A number: can be an integer value or a decimal value. */
        number
            : ("0".."9")+ ("." ("0".."9")+)?
            ;

        /* We're going to ignore all whitespace characters. */
        protected ws
            : (" " | "\t" | "\r" | "\n") { newline(); }
            ;

    It does generate four files without errors: P2.cpp, P2.hpp, P2TokenTypes.hpp and P2TokenTypes.txt. But now what? How do I create a working program with that? I tried to add these files to a VS2005 Win32 console project, but it does not compile:

        p2.cpp(277) : fatal error C1010: unexpected end of file while looking for
        precompiled header. Did you forget to add '#include "stdafx.h"' to your source?

    Read the article

  • How to invoke make install for one subdirectory of a Qt project

    - by chalup
    I'm working on a custom library, and I wish users could use it just by adding

        CONFIG += mylib

    to their .pro files. This can be done by installing a mylib.prf file into %QTDIR%/mkspecs/features. I've checked how the Qt Mobility project creates and installs such a file, but there is one thing I'd like to do differently. If I correctly understood the .pro/.pri files of Qt Mobility, the example projects inside it don't really use CONFIG += mobility; instead they add the QtMobility sources to the include path and share the *.obj directory with the main library project.

    For my library I'd like the examples to be as independent as possible, i.e. projects that can be compiled from anywhere once MyLib is compiled and installed. I have the following directory structure:

        mylib
        |
        |- examples
        |- src
        |- tests
        \- mylib.pro

    It seems that the easiest way to achieve what I described above is creating mylib.pro like this:

        TEMPLATE = subdirs
        SUBDIRS += src
        SUBDIRS += examples
        tests:SUBDIRS += tests

    and somehow enforcing "cd src && make install" after building src. What is the best way to do this? Of course, any other suggestions for automatic library deployment before the examples are compiled are welcome.
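
    One way to fold the install step into qmake itself, sketched with assumed paths (the $$[...] expansions are qmake's install-location properties): declare install targets in src/src.pro so that a top-level "make install" handles both the library and the .prf file, and force build order in mylib.pro.

        # in mylib.pro: build src before examples/tests
        CONFIG += ordered

        # in src/src.pro: what "make install" should copy
        feature.files = mylib.prf
        feature.path  = $$[QT_INSTALL_DATA]/mkspecs/features
        target.path   = $$[QT_INSTALL_LIBS]
        INSTALLS     += target feature

    The examples can then rely on CONFIG += mylib the same way end users would, instead of sharing object directories.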

    Read the article

  • One key, multiple values from different sources in C#

    - by user2964034
    I am trying to make a C# program that will compare two files for me and tell me the differences between specific parts. I have been able to get the parts I need into variables while looping through, but I now want to add these to a key with 3 values per file, so a key with 6 values overall, which I will then compare against each other later on. But I can only add 3 values at a time using the loop I have, so I need to be able to add the last 3 values to the key without overwriting the first 3.

    Example of data from a file:

        [\Advanced\Rules\Correlation Rules\Suspect_portscan\];
        CheckDescription =S Detect Port scans;
        Enabled =B 0;
        Priority =L 3;

    I have managed to get what I need into variables, so I have: string SigName, which would be "Suspect_portscan", and int Enabled, Priority, Blocking as 0, 3 and null respectively. I then want to make a dictionary-type thing, with a key that would be the SigName and the first 3 values as enabled, priority, blocking. Then, when looping through the second file, I want to add the 2nd file's settings for enabled, priority, blocking for the same SigName (so to the same key) in the last 3 value slots. I will then compare this against itself, like 'if signame(0) != signame(3)', so if file 1's enabled is not the same as file 2's enabled, make a note and tell me. The problem I have is not being able to get the data into a dictionary or lookup; I'm completely stumped. It seems like I should use a dictionary with a list for the values, but I can't get it working on the second loop through. Thanks.
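
    One shape that fits this (a sketch with hypothetical names, not the only way): key the dictionary on the signature name and give it a small record type with one slot-triple per file, so the second pass updates the existing entry instead of overwriting it.

        using System;
        using System.Collections.Generic;

        class SigSettings
        {
            // int? so "not present in this file" stays distinguishable from 0
            public int? Enabled1, Priority1, Blocking1;   // file 1
            public int? Enabled2, Priority2, Blocking2;   // file 2
        }

        class SigComparison
        {
            static Dictionary<string, SigSettings> sigs = new Dictionary<string, SigSettings>();

            // call from the file-1 parsing loop
            static void AddFromFile1(string sigName, int? enabled, int? priority, int? blocking)
            {
                sigs[sigName] = new SigSettings { Enabled1 = enabled, Priority1 = priority, Blocking1 = blocking };
            }

            // call from the file-2 parsing loop: fetch-or-create, never overwrite
            static void AddFromFile2(string sigName, int? enabled, int? priority, int? blocking)
            {
                SigSettings s;
                if (!sigs.TryGetValue(sigName, out s))
                    sigs[sigName] = s = new SigSettings();
                s.Enabled2 = enabled; s.Priority2 = priority; s.Blocking2 = blocking;
            }

            static void Report()
            {
                foreach (var kv in sigs)
                    if (kv.Value.Enabled1 != kv.Value.Enabled2)
                        Console.WriteLine("{0}: Enabled differs ({1} vs {2})",
                                          kv.Key, kv.Value.Enabled1, kv.Value.Enabled2);
            }
        }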

    Read the article

  • Rebuilding an old (2010) Django project in 2012

    - by birgit
    I am trying to make an old Django project run again. After seemingly having solved issues with old sorl.thumbnail versions and deprecated expressions, I now get this error when running python manage.py runserver. I also tried to copy & paste my old files into a new Django project and get exactly the same error. Maybe someone here has a clue where the problem lies?

        Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x2a80510>>
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 88, in inner_run
            self.validate(display_num_errors=True)
          File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate
            num_errors = get_validation_errors(s, app)
          File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors
            for (app_name, error) in get_app_errors().items():
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors
            self._populate()
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate
            self.load_app(app_name, True)
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app
            models = import_module('.models', app_name)
          File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 73, in <module>
            class Image(models.Model):
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 83, in Image
            'large': {'size': (640, 640)},
          File "/usr/lib/python2.7/dist-packages/django/db/models/fields/files.py", line 233, in __init__
            super(FileField, self).__init__(verbose_name, name, **kwargs)
        TypeError: __init__() got an unexpected keyword argument 'extra_thumbnails'

    I need to re-build the project just for visual documentation locally, so any hints on how to quickly re-run outdated Django projects are also very welcome!! Thanks a lot. (Using Ubuntu 12.04.)
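
    For "make it run just long enough to document it", one low-effort route is to pin the 2010-era versions in a throwaway virtualenv instead of porting the code forward; the TypeError above is what the old sorl-thumbnail ImageWithThumbnailsField(extra_thumbnails=...) API looks like when a newer sorl-thumbnail is installed. The exact versions below are guesses and may need adjusting:

        # hypothetical requirements.txt for a disposable virtualenv
        Django==1.2.7
        sorl-thumbnail==3.2.5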

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory.

    As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, a lot of times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed.

    What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with memory-mapped files: I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data.

    By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
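
    A minimal sketch of one NIO option: positional reads on a FileChannel. Unlike a shared RandomAccessFile pointer, read(buffer, position) takes the offset explicitly, so several threads can hit the same channel without coordinating (class and method names here are placeholders):

        import java.io.EOFException;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class TableReader {
            private final FileChannel channel;

            public TableReader(String path) throws IOException {
                channel = new RandomAccessFile(path, "r").getChannel();
            }

            /** Read the 4 bytes at the given absolute file offset. */
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0)
                        throw new EOFException("offset past end of table");
                }
                buf.flip();
                return buf.getInt();
            }
        }

    The other candidate is MappedByteBuffer; mapping is lazy (pages are faulted in on demand), so mapping 800 MB does not mean reading 800 MB into the heap, but on a 32-bit JVM the address space may not fit and the file would have to be mapped in chunks.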

    Read the article

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within the same table?

    - by Glenn
    We have a SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs. But I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have one table, an "orders" table, with the following 6 fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but quantities differing by up to 20 units, and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
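
    Criterion 2 as a single self-join, sketched against the example table (the o1.order_id < o2.order_id predicate keeps each pair from being reported twice; criterion 3 follows the same shape with an extra ABS(o1.quantity - o2.quantity) <= 20 condition):

        SELECT o1.order_id, o2.order_id
        FROM   orders o1
        JOIN   orders o2
          ON   o1.customer_id = o2.customer_id
         AND   o1.product_id  = o2.product_id
         AND   o1.quantity    = o2.quantity
         AND   o1.order_id    < o2.order_id
         AND   ABS(DATEDIFF(day, o1.sale_date, o2.sale_date)) <= 5;

    Whether this beats the cursor depends on indexing; a composite index on (customer_id, product_id, quantity, sale_date) is the kind of thing the join needs to avoid scanning orders twice.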

    Read the article

  • PHP Include and sort by variable within file

    - by Jason Hoax
    I have written this PHP include script, but now I'm trying to sort the included files by variables WITHIN the included PHP files. In other words, in each included PHP file there is a rating; I want the ratings to be read so that when the files are included they are sorted from highest to lowest (scores run from about 6.0 to 9.0). Kind regards!

        $location = 'experiments/visualizations';
        foreach (glob("$location/*.php") as $filename) {
            include $filename;
        }

    The included files are named randomly. File 1:

        $filename     = "AAAA";
        $projecttitle = "Project Name";
        $description  = "This totally explains the product";
        $score        = "7.6";

    File 2:

        $filename     = "BBBB";
        $projecttitle = "Project Name2";
        $description  = "This totally explains the product";
        $score        = "9.6";

    As you can see, 9.6 is higher than 7.6, but PHP sorts the includes by name instead of by variables within the files. I tried sorting, but I can't get it fixed. Help!
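
    A sketch of one way to do it: include each file once just to read its variables, copy them into an array (each include overwrites $projecttitle and friends, so they must be captured inside the loop), sort that array by score, and only then render:

        <?php
        $location = 'experiments/visualizations';
        $projects = array();

        foreach (glob("$location/*.php") as $file) {
            include $file;   // sets $filename, $projecttitle, $description, $score
            $projects[] = array(
                'title'       => $projecttitle,
                'description' => $description,
                'score'       => (float) $score,
            );
        }

        // highest score first
        usort($projects, function ($a, $b) {
            if ($a['score'] == $b['score']) return 0;
            return ($a['score'] > $b['score']) ? -1 : 1;
        });

        foreach ($projects as $p) {
            // render $p['title'], $p['score'], ... here
        }

    (Anonymous functions need PHP 5.3+; on older PHP, a named comparison function passed to usort works the same way.)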

    Read the article

  • Improve efficiency of array comparison in Ruby

    - by user2985025
    Hi, I am working on Ruby/Cucumber and have a requirement to develop a comparison module/program to compare two files. The requirements:

    - The project is a migration project; data from one application is moved to another.
    - We need to compare the data from the existing application against the new one.

    Solution: I have developed a comparison engine in Ruby for the above requirement.

    a) Get the data, deduplicated and sorted, from both DBs.
    b) Put the data in a text file with "||" as the delimiter.
    c) Use the key columns (numbers) that identify a unique record in the DB to compare the two files.

    For example, file 1 has 1,2,3,4,5,6 and file 2 has 1,2,3,4,5,7, and columns 1-5 are key columns. I use these key columns and compare 6 and 7, which results in a fail.

    Issue: The major problem we are facing is that if the mismatches exceed 70% for 100,000 records or more, the comparison time is large. If the mismatches are below 40%, the comparison time is OK. diff and diff-lcs will not work in this case, because we need key columns to arrive at an accurate data comparison between the two applications.

    Is there any other method to efficiently reduce the time when the mismatches exceed 70% for 100,000 records or more? Thanks.
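
    If each key is unique within a file, a hash lookup keeps the work linear in the number of records regardless of the mismatch rate, which is where pairwise or diff-style comparison degrades. A sketch, with file names and the five-column key as placeholder assumptions:

        KEY_COLS = 5  # assumption: the first five columns form the unique key

        def index_file(path)
          index = {}
          File.foreach(path) do |line|
            cols = line.chomp.split('||')
            index[cols[0, KEY_COLS]] = cols[KEY_COLS..-1]
          end
          index
        end

        old_data   = index_file('application1.txt')
        mismatches = 0

        File.foreach('application2.txt') do |line|
          cols = line.chomp.split('||')
          key, payload = cols[0, KEY_COLS], cols[KEY_COLS..-1]
          mismatches += 1 if old_data[key] != payload
        end

        puts "#{mismatches} mismatching records"

    This is one pass over each file plus hash probes, so a 70% mismatch rate costs the same as 7%.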

    Read the article

  • Problem routing a domain subfolder

    - by hkda150
    Hi there, I'm pretty new to ASP.NET MVC, and I hope this is not too silly a question. Here it comes.

    I have an ASP.NET MVC application at a URL similar to http://mydomain/mysubfolder1/myappfolder.

    My problem: the routing of my application (it worked fine before the app lived in a subfolder below the domain name).

    - The application's home page loads, with CSS files but without resources like images (defined in CSS files) and without jQuery ajax calls to URLs similar to /mycontroller/myaction.
    - Links work only once (the second time I get a page at a URL similar to this: http://mydomain/mysubfolder1/myappfolder/myController/myController/myAction).

    Here's my Global.asax containing the routing:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "myController", action = "Index", id = "" } // parameter defaults
            );

            routes.MapRoute(
                "Root",
                "",
                new { controller = "myController", action = "Index", id = "" }
            );
        }

        protected void Application_Start()
        {
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new MyApplicationWeb.LocalizationWebFormViewEngine());
            RegisterRoutes(RouteTable.Routes);
            //RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
        }

    Any suggestions? My first idea was to prefix the routes like areas, e.g. "mysubfolder1/myappfolder/{controller}/{action}/{id}" (but without any luck). Thank you very much for your help!
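
    Both symptoms fit URLs being built by hand ("/mycontroller/myaction" or relative hrefs) rather than through the routing helpers, which breaks as soon as the application root is no longer "/". A sketch of the helper-based pattern, with placeholder names:

        <%-- links and asset URLs via the framework, so the app root is prepended --%>
        <%= Html.ActionLink("Details", "myAction", "myController") %>
        <img src="<%= Url.Content("~/Content/images/logo.png") %>" alt="" />

        <%-- for ajax, emit the root once and build URLs from it --%>
        <script type="text/javascript">
            var appRoot = '<%= Url.Content("~/") %>';
            // $.get(appRoot + 'myController/myAction', ...);
        </script>

    The once-only links in particular suggest href="myController/myAction" (no leading slash), which the browser resolves relative to the current page and therefore doubles the controller segment on the second click.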

    Read the article

  • PHP: How do I rename a directory where the parent directory is variable?

    - by gsquare567
    I would like to move the files inside uploads/pension/#SOME_VARIABLE_NUMBER#/#SOME_CONSTANT_NUMBER#/. Here is my code:

        // move pension statements
        // located at uploads/pension/%COMPANY_ID%/%USER_ID%/%HASH%
        // so just move the %USER_ID% folder to the new company
        $oldPensionDir = "uploads/pension/" . $demo_user[Users::companyID] . "/" . $demo_user[Users::userID] . "/";
        $newPensionDir = "uploads/pension/" . $newCompanyID . "/" . $demo_user[Users::userID] . "/";

        // see if the user had any files, and if so, move them
        if (file_exists($oldPensionDir)) {
            // if it doesn't exist, make it
            if (!file_exists($newPensionDir))
                mkdir($newPensionDir);

            // move the folder
            rename($oldPensionDir, $newPensionDir);
        }

    However, when I need to make the directory with the mkdir function, I get:

        mkdir() [function.mkdir]: No such file or directory

    OK, maybe the mkdir won't work, but what about the rename? Perhaps that will make the directory if it's not there... nope!

        rename(uploads/pension/1001/783/,uploads/pension/1000/783/) [function.rename]:
        The system cannot find the path specified. (code: 3)

    So, there are two errors. I'm pretty sure that if the renaming works I won't even need the mkdir, but who knows... Can anyone tell me why these are errors and how to fix them? Thanks!
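
    Both messages are consistent with a missing intermediate directory: if uploads/pension/1000/ does not exist yet, a plain mkdir() of uploads/pension/1000/783/ fails, and rename() will not create the destination's parent either. mkdir()'s third argument creates the whole chain - a small sketch:

        // make sure the parent (e.g. uploads/pension/1000) exists,
        // then move the user's folder; true = create parents recursively
        $parent = dirname(rtrim($newPensionDir, '/'));
        if (!file_exists($parent)) {
            mkdir($parent, 0777, true);
        }
        rename($oldPensionDir, $newPensionDir);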

    Read the article

  • How to combine Apache requests?

    - by Bruce
    To give you the situation in the abstract: I have an ajax client that often needs to retrieve 3-10 static documents from the server. Those 3-10 documents are selected by the client out of about 100 documents in total. I have no way of knowing in advance which 3-10 documents the client will require. Additionally, those 100 documents are generated from database content, and so change over time.

    It seems messy to me to have to make 10 ajax requests for 10 separate documents. My first thought was to write a JSP that could use the include action, i.e. in pseudo-code:

        for (param in params) {
            jsp:include page="[param]"
        }

    But it turns out that Tomcat doesn't just include the HTML resource; it recompiles it, generating a class file every time, which also seems wasteful.

    Does anyone know of a neat solution for combining requests for static files into one request rather than several, but without the overhead of, for example, Tomcat generating extra class files for each static file and regenerating them each time the static file changes? Thanks! Hopefully my question is clear - it's a bit long-winded.
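
    A sketch of the same idea as a plain servlet, which sidesteps JSP compilation entirely: it just streams the requested documents back-to-back. The parameter name, base directory, and content type are assumptions, and a real version would whitelist file names rather than rely on the crude guard shown:

        import java.io.*;
        import javax.servlet.http.*;

        public class BundleServlet extends HttpServlet {
            private static final String DOC_DIR = "/var/docs";   // assumption

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("text/html");
                OutputStream out = resp.getOutputStream();
                byte[] buf = new byte[8192];
                String[] names = req.getParameterValues("doc");
                if (names == null) return;
                for (String name : names) {
                    if (name.contains("..") || name.contains("/")) continue;  // crude guard
                    InputStream in = new FileInputStream(new File(DOC_DIR, name));
                    try {
                        int n;
                        while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
                    } finally {
                        in.close();
                    }
                }
            }
        }

    The client then asks for /bundle?doc=a.html&doc=b.html&doc=c.html and splits the response, or the servlet could wrap each part in a JSON array instead.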

    Read the article

  • Loading JNI lib on Mac OS X?

    - by Clinton
    Background: I am attempting to load a jnilib (specifically JOGL) into Java on Mac OS X at runtime. I have been following along with the relevant Stack Overflow questions:

    - Maven and the JOGL Library
    - Loading DLL in Java - Eclipse - JNI
    - How to make a jar file that includes all jar files

    The end goal for me is to package the platform-specific JOGL files into a JAR, unzip them into a temp directory, and load them at start-up. I worked my problem back to simply attempting to load JOGL using hard-coded paths:

        File f = new File("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl.jnilib");
        System.load(f.toString());
        f = new File("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl_awt.jnilib");
        System.load(f.toString());

    I get the following exception when attempting to use the JOGL API:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: no jogl in java.library.path

    But when I specify java.library.path by adding the following JVM option, everything works fine:

        -Djava.library.path="/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/"

    Question: Is it possible to use System.load (or some other variant) on Mac OS X as a replacement for -Djava.library.path that is invoked at runtime?
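
    The error suggests JOGL itself calls System.loadLibrary("jogl") internally, which consults java.library.path no matter what was already loaded via System.load. One commonly cited workaround - very much a hack, since it pokes a private JVM field, so treat this as a sketch for the Java 5/6-era VMs in question - is to change the property at runtime and null the ClassLoader's cached path list so it is re-parsed on the next loadLibrary:

        import java.io.File;
        import java.lang.reflect.Field;

        public final class LibraryPath {
            public static void prepend(String dir) throws Exception {
                System.setProperty("java.library.path",
                        dir + File.pathSeparator + System.getProperty("java.library.path", ""));
                // force the JVM to re-read java.library.path (private field hack)
                Field sysPaths = ClassLoader.class.getDeclaredField("sys_paths");
                sysPaths.setAccessible(true);
                sysPaths.set(null, null);
            }
        }

    Called before the first JOGL class is touched, e.g. LibraryPath.prepend(tmpDir), this has the same effect as the -D option without requiring it on the command line.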

    Read the article

  • InPlaceBitmapMetadataWriter.TrySave() returns true but does nothing

    - by mephisto123
    On some .JPG files (EPS previews generated by Adobe Illustrator), on Windows 7, InPlaceBitmapMetadataWriter.TrySave() returns true after some SetQuery() calls but does nothing. Code sample:

        BitmapDecoder decoder;
        BitmapFrame frame;
        BitmapMetadata metadata;
        InPlaceBitmapMetadataWriter writer;

        decoder = BitmapDecoder.Create(s, BitmapCreateOptions.PreservePixelFormat | BitmapCreateOptions.IgnoreColorProfile, BitmapCacheOption.Default);
        frame = decoder.Frames[0];
        metadata = frame.Metadata as BitmapMetadata;
        writer = frame.CreateInPlaceBitmapMetadataWriter();

        try
        {
            writer.SetQuery("System.Title", title);
            writer.SetQuery(@"/app1/ifd/{ushort=" + exiftagids[0] + "} ", (title + '\0').ToCharArray());
            writer.SetQuery(@"/app13/irb/8bimiptc/iptc/object name", title);
            return writer.TrySave();
        }
        catch
        {
            return false;
        }

    You can reproduce the problem (if you have Windows 7) by downloading the image sample and using this code to set a title on the image. The image has enough room for metadata - and this code works fine on my WinXP machine. The same code works fine on Win7 with other .JPG files. Any ideas are welcome :)
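
    When the in-place writer can't actually commit (padding or an unusual app-segment layout, which Illustrator previews are prone to), the usual fallback is a full re-encode: clone the metadata, set the queries on the clone, and write a new file. A sketch reusing frame, metadata and title from the sample above (outPath is hypothetical):

        BitmapMetadata md = metadata.Clone();
        md.SetQuery("System.Title", title);

        JpegBitmapEncoder encoder = new JpegBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(frame, frame.Thumbnail, md, frame.ColorContexts));
        using (Stream outStream = File.Open(outPath, FileMode.Create))
        {
            encoder.Save(outStream);
        }

    This rewrites the whole JPEG (and re-encodes the pixels), so it trades the "in place" guarantee for actually persisting the values.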

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long speech short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue.

    I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same.

    My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request will "call back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to a code base that I'm not prepared to make on a live system.

    The more I think of it, the more the base idea looks to me like a pre-forked web server, only that it doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?

    Read the article

  • Visual Studio 2010 Bootstrap Folder?

    - by Matma
    We currently run a VS2005 app/environment, and I'm investigating upgrading to 2010. But before I can say "yes, it's all good, go buy it, bossman"... I have to confirm that a bootstrapper for a third-party driver install, which is included on our installation CD, can be used/included (it's a USB security dongle, so drivers MUST be included/set up for our app to run).

    It is in the list of available prerequisites, but I didn't put it in the VS2010 bootstrapper folder, so is this coming from my VS2005 install? Or from the fact that I had previously selected it in the installer, and it's saved/loaded that in VS2010? Either way, it's got a little error icon and says it's not found, so I get errors on build.

    I want to resolve this and have it in the list of available prerequisites, as we did for 2005, i.e. copy it to the C:\Program Files (x86)\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper folder... But I can't seem to find where the bootstrappers are for VS2010. Does it not use them? Naturally, I've searched the entire VS folder structure for appropriate files/folders, but found nothing. Where/how is this now done?

    Read the article

  • Import CSV to class structure as the user defines

    - by Assimilater
    I have a contact manager program, and I would like to offer the feature to import CSV files. The problem is that different data sources order the fields in different ways. I thought of programming an interface for the user to tell it the field order and how to handle exceptions. Here is an example line in one of many possible field orders:

        "ID#","Name","Rank","Address1","Address2","City","State","Country","Zip","Phone#","Email","Join Date","Sponsor ID","Sponsor Name"
        "Z1234","Call, Anson","STU","1234 E. 6578 S.","","Somecity","TX","United States","012345","000-000-0000","[email protected]","5/24/2010","z12343","Quantum Independence"

    Notice that in one data field, "Name", there is a comma to separate last name and first name, and in another there is not.

    My plan is to have a line for each field (i.e. ID, Name, City, etc.), an "import to" statement, and a list box with options like: Don't Import, Business, Join Date, First Name, Zip - and the program recognizes those as properties of an object. I'd also like the user to be able to record preset field orders so they can reuse them for CSV files from the same download source.

    Then I also need it to check whether a record already exists (is there a record for Anson Call already?) and allow the user to tell it what to do if there is one (e.g. the mailing address may have changed, so if that field is filled, overwrite it; or this mailing address is invalid, so leave the current data untouched for this person and overwrite the rest).

    While I'm capable of coding this... I'm not very excited about it, and I'm wondering if there's a tool or set of tools out there that already performs most of this functionality. I hope this makes sense...
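
    The field-mapping core can be small: a dictionary from source column index to a setter on the contact object, built from whatever the user picks in the list boxes. A sketch with hypothetical names:

        using System;
        using System.Collections.Generic;

        class Contact
        {
            public string Id, Name, City, Zip, JoinDate;
        }

        class CsvImporter
        {
            // column index -> what to do with that column's value;
            // columns with no entry are "Don't Import"
            private readonly Dictionary<int, Action<Contact, string>> map =
                new Dictionary<int, Action<Contact, string>>();

            public void MapColumn(int index, Action<Contact, string> setter)
            {
                map[index] = setter;
            }

            public Contact Parse(string[] fields)
            {
                var c = new Contact();
                foreach (var kv in map)
                    if (kv.Key < fields.Length)
                        kv.Value(c, fields[kv.Key]);
                return c;
            }
        }

        // usage, after the user maps "column 0 = ID, column 8 = Zip" in the UI:
        //   importer.MapColumn(0, (c, v) => c.Id = v);
        //   importer.MapColumn(8, (c, v) => c.Zip = v);

    Presets are then just the (index, property-name) pairs serialized per download source, and the duplicate check becomes a lookup on whichever property the user nominated as the match key.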

    Read the article

  • Type errors when using the same name

    - by lykimq
    I have 3 files.

    1) cpf0.ml:

        type string = char list
        type url = string
        type var = string
        type name = string
        type symbol =
        | Symbol_name of name

    2) problem.ml:

        type symbol =
        | Ident of string

    3) test.ml:

        open Problem;;
        open Cpf0;;

        let symbol b = function
          | Symbol_name n -> Ident n

    When I compile test.ml (ocamlc -c test.ml), I receive an error:

        This expression has type Cpf0.name = char list
        but an expression was expected of type string

    Could you please help me to correct it? Thank you very much.

    EDIT: Thank you for your answer. I want to explain more about these 3 files, because I am working with extraction from Coq to OCaml types. cpf0.ml is generated from cpf.v:

        Require Import String.
        Definition string := string.
        Definition name := string.
        Inductive symbol :=
        | Symbol_name : name -> symbol.

    The extraction code (extraction.v), where ExtrOcamlString is used:

        Set Extraction Optimize.
        Extraction Language Ocaml.
        Require ExtrOcamlBasic ExtrOcamlString.
        Extraction Blacklist cpf list.

    I added open Cpf0;; to problem.ml, and I got a new problem, because Cpf0 carries another definition of the type string:

        This expression has type Cpf0.string = char list
        but an expression was expected of type Util.StrSet.elt = string

    Here is the definition in util.ml that uses the type string:

        module Str = struct type t = string end;;
        module StrOrd = Ord.Make (Str);;
        module StrSet = Set.Make (StrOrd);;
        module StrMap = Map.Make (StrOrd);;

        let set_add_chk x s =
          if StrSet.mem x s then failwith (x ^ " already declared")
          else StrSet.add x s;;

    I was trying to change t = string to t = char list, but if I do that I have to change a lot of functions that depend on it (for example, set_add_chk above). Could you please give me a good idea of what I should do in this case?
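
    The error message is exact: after extraction with ExtrOcamlString, Cpf0.string is char list, which is a different type from OCaml's built-in string, so one side has to be converted at the boundary. A small conversion sketch (implode is a name chosen here, not part of either file; it is deliberately left unannotated because open Cpf0 shadows the name string):

        (* char list (extracted Coq string) -> native OCaml string *)
        let implode cs =
          let buf = Buffer.create (List.length cs) in
          List.iter (Buffer.add_char buf) cs;
          Buffer.contents buf

        (* the match arm then becomes: *)
        let symbol b = function
          | Symbol_name n -> Ident (implode n)

    This assumes Problem.symbol really is meant to hold a native string; keeping char list everywhere and converting only when talking to modules like Util.StrSet is the other consistent choice.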

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application to try to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas.

    At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using, as from what I've seen elsewhere there's no other way to do this in C#. The issue with it is that there's quite a high overhead while it retrieves the size of each file and then builds a total:

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
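
    A sketch of that incremental idea - seed a dictionary with the initial sizes, then let a FileSystemWatcher adjust the running total. Locking, rename handling and error paths are left out, and the class name is hypothetical:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class QuotaTracker
        {
            private readonly Dictionary<string, long> sizes = new Dictionary<string, long>();
            private long total;

            public QuotaTracker(string root)
            {
                // one-time full scan to seed the totals
                foreach (FileInfo f in new DirectoryInfo(root).GetFiles("*.*", SearchOption.AllDirectories))
                {
                    sizes[f.FullName] = f.Length;
                    total += f.Length;
                }

                FileSystemWatcher watcher = new FileSystemWatcher(root);
                watcher.IncludeSubdirectories = true;
                watcher.Changed += OnChangedOrCreated;
                watcher.Created += OnChangedOrCreated;
                watcher.Deleted += OnDeleted;
                watcher.EnableRaisingEvents = true;
            }

            private void OnChangedOrCreated(object sender, FileSystemEventArgs e)
            {
                FileInfo fi = new FileInfo(e.FullPath);
                if (!fi.Exists) return;          // ignore directory events etc.
                long old;
                sizes.TryGetValue(e.FullPath, out old);
                sizes[e.FullPath] = fi.Length;
                total += fi.Length - old;
            }

            private void OnDeleted(object sender, FileSystemEventArgs e)
            {
                long old;
                if (sizes.TryGetValue(e.FullPath, out old))
                {
                    total -= old;
                    sizes.Remove(e.FullPath);
                }
            }

            public long Total { get { return total; } }
        }

    The full directory scan then runs once (or nightly as a consistency check) instead of on every quota query.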

    Read the article

  • Can I attach data gathered by a form to a file that is being uploaded?

    - by Jacob
    I need customers to upload files to my website, and I want to gather their name or company name and attach it to the file name, or create a folder on the server with that as the name, so we can keep the files organized. I'm using PHP to upload the file. PHP:

        if (isset($_POST['submit'])) {
            $target    = "upload/";
            $file_name = $_FILES['file']['name'];
            $tmp_dir   = $_FILES['file']['tmp_name'];
            try {
                if (!preg_match('/(jpe?g|psd|ai|eps|zip|rar|tif?f|pdf)$/i', $file_name)) {
                    throw new Exception("Wrong File Type");
                    exit;
                }
                move_uploaded_file($tmp_dir, $target . $file_name);
                $status = true;
            } catch (Exception $e) {
                $fail = true;
            }
        }

    The other PHP, with the form:

        <form enctype="multipart/form-data" action="" method="post">
            <input type="hidden" name="MAX_FILE_SIZE" value="1073741824" />
            <label for="file">Choose File to Upload</label><br />
            <input name="file" type="file" id="file" size="50" maxlength="50" /><br />
            <input type="submit" name="submit" value="Upload" />
        </form>

        <?php
        if (isset($status)) {
            $yay = "alert-success";
            echo "<div class=\"$yay\"> <br/> <h2>Thank You!</h2> <p>File Upload Successful!</p></div>";
        }
        if (isset($fail)) {
            $boo = "alert-error";
            echo "<div class=\"$boo\"> <br/> <h2>Sorry...</h2> <p>There was a problem uploading the file.</p><br/><p>Please make sure that you are trying to upload a file that is less than 50mb and an acceptable file type.</p></div>";
        }
        ?>
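
    Attaching the name is mostly a matter of reading it from $_POST alongside $_FILES and folding a sanitized copy into the target path. A sketch (the "company" field name is an assumption; add it to the form as a text input):

        <?php
        // in the form: <input name="company" type="text" />
        if (isset($_POST['submit'])) {
            // strip anything that isn't safe in a directory name
            $company = preg_replace('/[^A-Za-z0-9_-]/', '_', $_POST['company']);
            $target  = "upload/" . $company . "/";

            if (!is_dir($target)) {
                mkdir($target, 0755, true);    // per-customer folder
            }
            move_uploaded_file($_FILES['file']['tmp_name'],
                               $target . basename($_FILES['file']['name']));
        }
        ?>

    The same sanitized value could instead be prefixed to the file name ($company . '_' . $file_name) if one flat folder is preferred.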

    Read the article

  • Providing downloads on an ASP.NET website

    - by Dave
    I need to provide downloads of large files (upwards of 2 GB) on an ASP.net website. It has been some time since I've done something like this (I've been in the thick-client world for awhile now), and was wondering on current best practices for this. Ideally, I would like:

    - To be able to track download statistics: # of downloads is essential; actual bytes sent would be nice.
    - To provide downloads in a way that "plays nice" with third-party download managers. Many of our users have unreliable internet connections, and being able to resume a download is a must.
    - To allow multiple users to download the same file simultaneously.

    My download files are not security-sensitive, so providing a direct link ("right-click to download...") is a possibility. Is just providing a direct link sufficient, letting IIS handle it, and then using some log analyzer service (any recommendations?) to compile and report the statistics? Or do I need to intercept the download request, store some info in a database, then send a custom Response? Or is there an ASP.net user control (built-in or third party) that does this? I appreciate all suggestions.
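
    A middle road between a bare direct link and a fully custom streaming Response is a tiny handler that records the hit and then redirects to the real file, letting IIS's static-file handling do the Range/resume work that download managers expect. A sketch with hypothetical names (Download.ashx, a downloads folder):

        using System.IO;
        using System.Web;

        // Download.ashx - count the hit, then let IIS stream the bytes
        public class TrackedDownload : IHttpHandler
        {
            public void ProcessRequest(HttpContext ctx)
            {
                // strip any path components from the requested name
                string file = Path.GetFileName(ctx.Request.QueryString["f"]);

                // TODO: increment a per-file counter in the database here

                // hand off to IIS's static handler, which already supports
                // Range requests (resume) and concurrent downloads
                ctx.Response.Redirect("~/downloads/" + file);
            }

            public bool IsReusable { get { return true; } }
        }

    This counts initiated downloads only; for actual bytes sent, the IIS logs (the sc-bytes field per URL) are a cheaper source than streaming 2 GB through ASP.NET yourself.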

    Read the article

  • Checking when two headers are included at the same time.

    - by fortran
    Hi, I need to make an assertion based on two related preprocessor #defines declared in different header files. The code base is huge, and it would be nice if I could find a place to put the assertion where the two headers are already included, to avoid polluting namespaces unnecessarily. Just checking that a file includes both explicitly might not suffice, as one (or both) of them might be included at an upper level of a nested include hierarchy. I know it wouldn't be too hard to write a script to check that, but if there's already a tool that does the job, all the better.

    Example. File foo.h:

        #define FOO 0xf

    File bar.h:

        #define BAR 0x1e

    I need to put somewhere (it doesn't matter much where) something like this:

        #if (2*FOO) != BAR
        #error "foo is not twice bar"
        #endif

    Yes, I know the example is silly, as one value could simply be derived from the other, but let's say that the includes can be generated from different places not under my control, and I just need to check that they match at compile time. And I don't want to just add one include after the other, as that might conflict with previous code that I haven't written, which is why I would like to find a file where both are already present.

    In brief: how can I find a file that includes (directly or indirectly) two other files? Thanks!
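
    If no such file turns up by inspection, a throwaway script can locate translation units that end up with both macros in scope - a sketch, GCC-specific (-E preprocesses, -dM dumps every #define that survives preprocessing), and it assumes each file preprocesses standalone with the project's normal include flags:

        # report sources whose preprocessed form defines both FOO and BAR
        for f in src/*.c; do
          defs=$(gcc -E -dM "$f" 2>/dev/null)
          echo "$defs" | grep -q '#define FOO' &&
          echo "$defs" | grep -q '#define BAR' &&
          echo "$f sees both foo.h and bar.h"
        done

    Any file the script reports is a safe home for the #if/#error block, since both macros are provably visible there.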

    Read the article
