Search Results

Search found 62526 results on 2502 pages for 'data processing'.

Page 46/2502 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Data Import Resources for Release 17

    - by Pete
    With Release 17 you now have three ways to import data into CRM On Demand:

    - The Import Assistant
    - Oracle Data Loader On Demand, a new Java-based command-line utility with a programmable API
    - Web Services

    We have created the Data Import Options Overview document to help you choose the method that works best for you. We have also created the Data Import Resources page as a single point of reference; it guides you to all resources related to these three import options. So if you're looking for the Data Import Options Overview document, the Data Loader Overview for Release 17, the Data Loader User Guide, or the Data Loader FAQ, here's where you find them: on our new Training and Support Center, under the Learn More tab, go to the What's New section and click Data Import Resources.

    Read the article

  • Rails Easy Data Dumping

    - by Madhan ayyasamy
    Hi friends, the following snippet shows the easiest way to dump data in a Ruby on Rails environment. You'll often need to get data from production to dev, or dev to your local, or your local to another developer's local. One plug-in we use over and over is Yaml_db. This nifty little plug-in enables you to dump or load data by issuing a Rake command. The data is persisted in a YAML file located in db/data.yml. This is very portable and easy to read if you need to examine the data.

        rake db:data:dump

    Example data found in db/data.yml:

        ---
        campaigns:
          columns:
          - id
          - client_id
          - name
          - created_at
          - updated_at
          - token
          records:
          - - "1"
            - "1"
            - First push
            - 2008-11-03 18:23:53
            - 2008-11-03 18:23:53
            - 3f2523f6a665
          - - "2"
            - "2"
            - First push
            - 2008-11-03 18:26:57
            - 2008-11-03 18:26:57
            - 9ee8bc427d94

    Read the article

  • Transmitting Form Data from the Client to the Web Server

    The steps involved in transmitting form data from the client to the web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    5. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    6. The server reads the posted data from the query string and validates it again, both to ensure data consistency and to prevent non-validated data (in case JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes (a sketch of this step follows below).
    7. If the data passes the second validation check, the server-side code continues with the requested processes.
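
    As a hedged illustration of step 6, here is a minimal Python sketch using only the standard library; the field names and validation rules are hypothetical, not part of the steps above:

        import urllib.parse

        def validate_form(raw_body: str) -> dict:
            """Decode URL-encoded form data and re-check it server-side."""
            fields = urllib.parse.parse_qs(raw_body)
            errors = {}
            # Re-validate even though the client already ran JavaScript checks,
            # because JavaScript may have been disabled in the browser.
            email = fields.get("email", [""])[0]
            if "@" not in email:
                errors["email"] = "invalid email"
            age = fields.get("age", [""])[0]
            if not age.isdigit():
                errors["age"] = "age must be a number"
            return errors

        # Example: a POST body as the browser would URL-encode it
        print(validate_form("email=user%40example.com&age=42"))  # {}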

    Read the article

  • Which data structure would you use for a witness list?

    - by mateen
    I'm making a game where the plot is a bank robbery. Lots of people witness that robbery. The game will load a list of suspects, while the players (witnesses) have to identify the suspects of this robbery. The game should load a list of suspects so the player can identify the right one as quickly as possible. An admin can add/remove suspects in the lists, and two or more lists of suspects can also be merged into one (to show to the player). The question is: which data structure would be suitable for these lists?
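
    A hedged sketch of one possibility, assuming suspects have unique IDs (the Suspect type and field names here are hypothetical): Python sets give O(1) add/remove/membership, and merging two lists is just a set union.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Suspect:
            suspect_id: int
            name: str

        # Each witness list is a set keyed by unique suspects.
        list_a = {Suspect(1, "Alice"), Suspect(2, "Bob")}
        list_b = {Suspect(2, "Bob"), Suspect(3, "Carol")}

        list_a.add(Suspect(4, "Dave"))         # admin adds a suspect
        list_a.discard(Suspect(1, "Alice"))    # admin removes a suspect

        merged = list_a | list_b               # merge lists shown to the player
        print(sorted(s.name for s in merged))  # ['Bob', 'Carol', 'Dave']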

    Read the article

  • Is there a data structure for this type of list/map?

    - by Nick
    Perhaps there's a name for what I want, but I'm not aware of it. I need something similar to a LinkedHashMap in Java, but where it returns the 'previous' value if there's no value at the specified key. That is, I have a list of objects stored by an integer key (which is in units of time in my case):

        ; key->value
        10->A
        15->B
        20->C

    So, if I were to query for a value for keys 0-9, it would return null. The special part is that if I queried for something 10 <= i <= 14, it would return A. Or, for i = 20, it would return C. Is there a data structure for this?
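
    This is a floor lookup: find the entry with the greatest key less than or equal to the query. In Java, TreeMap.floorEntry() does exactly this. Below is a minimal Python sketch of the same idea using a sorted key list and binary search (the class and method names are my own):

        import bisect

        class FloorMap:
            """Maps integer keys to values; lookups fall back to the
            greatest key <= the query (or None if there is none)."""
            def __init__(self, items):
                self._keys = sorted(items)
                self._map = dict(items)

            def floor_get(self, key):
                # bisect_right returns the insertion point after any equal key,
                # so index - 1 is the greatest key <= the query.
                i = bisect.bisect_right(self._keys, key)
                return self._map[self._keys[i - 1]] if i else None

        m = FloorMap({10: "A", 15: "B", 20: "C"})
        print(m.floor_get(9))   # None
        print(m.floor_get(12))  # A
        print(m.floor_get(20))  # C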

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most failing hard drives make, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive.

    When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for and attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy.'

    I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error:

        The type of the filesystem is RAW. CHKDSK is not available for RAW drives.

    Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos), but how long should I expect either tool to run given approximately 400 GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities or tools and strategies with which you've had success.

    Read the article

  • I want to change DPI with Imagemagick without changing the actual byte-size of the image data

    - by user1694803
    I feel horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In Gimp there is a very simple way to do what I want. I only have the German dialog installed but I'll try to translate it. I'm talking about going to "Picture -> Printing Size" and then adjusting the values "X-Resolution" and "Y-Resolution", which are known to me as so-called DPI values. You can also choose the unit, which by default is "Pixel/Inch". (In German the dialog is "Bild -> Druckgröße" and there "X-Auflösung" and "Y-Auflösung".)

    OK, the values there are often "72" by default. When I change them to e.g. "300", this has the effect that the image stays the same on the computer, but if I print it, it will be smaller if you look at it, but all the details are still there, just smaller - it has a higher resolution on the printed paper (but smaller size... which is fine for me). I am often doing that when I am working with LaTeX, or to be exact with the command "pdflatex" on a recent Ubuntu machine. When I do the above process with Gimp manually, everything works just fine: the images appear smaller in the resulting PDF but with high printing quality.

    What I am trying to do is to automate the process of going into Gimp and adjusting the DPI values. Since Imagemagick is known to be superb and I have used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things I think this actually should be the command that should be my friend:

        convert input.png -density 300 output.png

    This should set the DPI to 300, as I read everywhere on the web. It seems to work; when I check the file it stays the same:

        file input.png output.png
        input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced
        output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced

    When I use this command, it seems like it did what I wanted:

        identify -verbose output.png | grep 300
        Resolution: 300x300
        PNG:pHYs : x_res=300, y_res=300, units=0

    (Funny enough, the same output comes for input.png, which confuses me... so this might be the wrong parameter to watch?) But when I now render my TeX with "pdflatex", the image is still big and blurry. Also, when I open the image with Gimp again, the DPI values are set to "72" instead of "300". So there actually was no effect at all.

    Now what is the problem here? Am I getting something completely wrong? I can't be that wrong since everything works just fine with Gimp... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
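
    One hedged observation on the output above: units=0 in the pHYs chunk means the resolution unit is undefined, so readers like Gimp may ignore the density and fall back to 72 DPI. If I remember ImageMagick's options correctly, declaring the unit explicitly may help; this is a guess based on the units=0 line, not a verified fix:

        convert -units PixelsPerInch input.png -density 300 output.png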

    Read the article

  • Cannot install or remove packages

    - by Nuno
    I tried to install the openoffice.org package but it gave an error. Now I cannot repair, remove, or install anything. The Xubuntu Software Center gives the error log below. Nothing seems to solve this; I tried apt-get install -f but it does not solve the problem either. Any suggestions?

        installArchives() failed:
        (Reading database ... 184189 files and directories currently installed.)
        Unpacking libreoffice-common (from .../libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb (--unpack):
         trying to overwrite '/usr/bin/soffice', which is also in package openoffice.org-debian-menus 3.2-9502
        No apport report written because MaxReports is reached already
        rmdir: failed to remove /var/lib/libreoffice/share/prereg/: No such file or directory
        rmdir: failed to remove /var/lib/libreoffice/share/: Directory not empty
        rmdir: failed to remove /var/lib/libreoffice/program/: No such file or directory
        rmdir: failed to remove /var/lib/libreoffice: Directory not empty
        rmdir: failed to remove /var/lib/libreoffice: Directory not empty
        Processing triggers for desktop-file-utils ...
        Processing triggers for shared-mime-info ...
        Processing triggers for gnome-icon-theme ...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb
        dpkg: dependency problems prevent configuration of libreoffice-java-common:
         libreoffice-java-common depends on libreoffice-common; however:
          Package libreoffice-common is not installed.
        dpkg: error processing libreoffice-java-common (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-filter-mobiledev:
         libreoffice-filter-mobiledev depends on libreoffice-java-common; however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice-filter-mobiledev (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-base:
         libreoffice-base depends on libreoffice-java-common (>= 1:3.5.4~); however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice-base (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-core:
         libreoffice-core depends on libreoffice-common (>> 1:3.5.4); however:
          Package libreoffice-common is not installed.
        dpkg: error processing libreoffice-core (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-style-human:
         libreoffice-style-human depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-style-human (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-math:
         libreoffice-math depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-math (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-impress:
         libreoffice-impress depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-impress (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-style-tango:
         libreoffice-style-tango depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-style-tango (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice:
         libreoffice depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
         libreoffice depends on libreoffice-impress; however:
          Package libreoffice-impress is not configured yet.
         libreoffice depends on libreoffice-math; however:
          Package libreoffice-math is not configured yet.
         libreoffice depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
         libreoffice depends on libreoffice-filter-mobiledev; however:
          Package libreoffice-filter-mobiledev is not configured yet.
         libreoffice depends on libreoffice-java-common (>= 1:3.5.4~); however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-writer:
         libreoffice-writer depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-writer (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mythes-en-us:
         mythes-en-us depends on libreoffice-core | openoffice.org-core (>= 1.9) | language-support-writing-en; however:
          Package libreoffice-core is not configured yet.
          Package openoffice.org-core is not installed.
          Package language-support-writing-en is not installed.
        dpkg: error processing mythes-en-us (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-zh-cn:
         libreoffice-help-zh-cn depends on libreoffice-writer | language-support-translations-zh; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-zh is not installed.
        dpkg: error processing libreoffice-help-zh-cn (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-base-core:
         libreoffice-base-core depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-base-core (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-gnome:
         libreoffice-gnome depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-gnome (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-pt-br:
         libreoffice-help-pt-br depends on libreoffice-writer | language-support-translations-pt; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-pt is not installed.
        dpkg: error processing libreoffice-help-pt-br (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-emailmerge:
         libreoffice-emailmerge depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-emailmerge (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-en-gb:
         libreoffice-help-en-gb depends on libreoffice-writer | language-support-translations-en; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-en is not installed.
        dpkg: error processing libreoffice-help-en-gb (--configure):
         dependency problems - leaving unconfigured

    Read the article

  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to my NAS over ethernet once the RAID array is set up. Since this is all happening over the network, I'm a bit worried about my data getting corrupted silently during transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to the transferring side to check the integrity of its data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe, or do I have to use an external checksum tool to make sure?
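
    For the external-check route, here is a minimal Python sketch (the source and destination paths are placeholders) that hashes every file on both sides and reports mismatches:

        import hashlib
        from pathlib import Path

        def sha256_of(path: Path) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def compare_trees(src: Path, dst: Path) -> list:
            """Return relative paths whose content differs (or is missing) in dst."""
            bad = []
            for f in src.rglob("*"):
                if f.is_file():
                    rel = f.relative_to(src)
                    target = dst / rel
                    if not target.is_file() or sha256_of(f) != sha256_of(target):
                        bad.append(rel)
            return bad

        # Hypothetical local source and NAS destination paths:
        print(compare_trees(Path(r"D:\backup"), Path(r"\\nas\share\backup")))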

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. The controller can operate on two NSManagedObjectContexts:

    - NSManagedObjectContext *mainMoc: instance variable for the main thread. This context is used only for quick UI access by the main thread or via the dispatch_get_main_queue() global queue.
    - NSManagedObjectContext *bgMoc: for background tasks (importing, and fetching data for the NSFetchedResultsController for tables). These background tasks are fired ONLY by a user-defined queue: dispatch_queue_t bgQueue (instance variable in the database controller object).

    Fetching data for tables is done in the background so as not to block the UI when bigger or more complicated predicates are performed. Example fetching code for the NSFetchedResultsController in my table view controllers:

        -(void)fetchData{
            dispatch_async([CDdb db].bgQueue, ^{
                NSError *error = nil;
                [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate];
                if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) {
                    NSSLog(@"Unresolved error in fetchData %@", error);
                }
                if (!initial_fetch_attampted) initial_fetch_attampted = YES;
                fetching = NO;
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.table reloadData];
                    [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES];
                });
            });
        } // end of fetchData function

    bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification:

        - (void)bgMocDidSave:(NSNotification *)saveNotification {
            // CDdb - bgMoc did save - merging changes with main mainMoc
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification];
                // Extra notification for some other, potentially interested clients
                [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification];
            });
        }

        - (void)mainMocDidSave:(NSNotification *)saveNotification {
            // CDdb - main mainMoc didSave - merging changes with bgMoc
            dispatch_async(self.bgQueue, ^{
                [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification];
            });
        }

    The NSFetchedResultsController delegate has only one method implemented (for simplicity):

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self fetchData];
            });
        }

    This way I am trying to follow the Apple recommendation for Core Data: one NSManagedObjectContext per thread. I know this pattern is not completely clean, for at least two reasons:

    1. bgQueue does not necessarily fire the same thread after suspension, but since it is serial, it should not matter much (there are never two threads trying to access the bgMoc NSManagedObjectContext dedicated to it).
    2. Sometimes table view data source methods will ask the NSFetchedResultsController for info from bgMoc (since the fetch is done on bgQueue), like section count, fetched objects in section count, etc.

    Even with these flaws, this approach works pretty well for 95% of the application running time, until... AND HERE GOES MY QUESTION: Sometimes, very randomly, the application freezes but does not crash. It does not respond to any touch, and the only way to bring it back to life is to restart it completely (switching to the background and back does not help). No exception is thrown and nothing is printed to the console (I have breakpoints set for all exceptions in Xcode).
    I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously only (using dispatch_async everywhere). This makes me think it is not just a standard deadlock. Is there any possibility or hint of how I could get more info on what is going on? Some extra debug flags, Instruments magic tricks, build settings, etc.? Any suggestions on what could be the cause are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.

    Read the article

  • Tile Engine - Procedural generation, Data structures, Rendering methods - A lot of effort question!

    - by Trixmix
    Isometric tile and GameObject rendering. To achieve the desired look, I need to take into consideration which tiles need to be drawn first and which last. What I used is an object, a TileRenderQueue, to which you give a tile list and which gives you back a queue of which tiles to draw, based on their Z coordinate, so that tiles with a higher Z are drawn last. Now, as I explained above, instead of the location data being stored in the tile instance, I want the index in the array to be the location; then maybe, based on the array, I could draw the tiles instead of spending a long time for-looping over them and ordering them by Z. This is the hardest part for me. It's hard for me to find a simple solution to the which-one-to-draw-when problem. There is also the fact that if the X is larger, the game object with the larger X needs to be drawn over the rest of the tiles, and so on. (An example image accompanied the original post.) All the parts work together to create an efficient engine, so it's important to me that you answer all of the parts. I hope you will work on the answers just as hard as I worked on this question! If any part is unclear, tell me so in the comments! Thanks for the help!
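
    A minimal sketch of the draw-order idea described above, assuming each tile carries x, y, z grid coordinates (the Tile type and field names here are hypothetical): sort once per frame by (z, x) so higher-Z tiles, and within a level higher-X objects, are drawn last and end up on top.

        from dataclasses import dataclass

        @dataclass
        class Tile:
            x: int
            y: int
            z: int
            sprite: str

        def render_queue(tiles):
            """Painter's algorithm: draw low z first, and within a z level
            draw low x first so larger-x objects end up on top."""
            return sorted(tiles, key=lambda t: (t.z, t.x))

        tiles = [Tile(1, 0, 2, "tree"), Tile(0, 0, 0, "grass"), Tile(2, 0, 1, "rock")]
        for t in render_queue(tiles):
            print("draw", t.sprite)  # grass, rock, tree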

    Read the article

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-day instructor-led course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics, and knowledge discovery inside the Oracle Database.

    Course objectives:

    - Explain basic data mining concepts and describe the benefits of predictive analysis
    - Understand primary data mining tasks, and describe the key steps of a data mining process
    - Use the Oracle Data Miner to build, evaluate, and apply multiple data mining models
    - Use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: predict individual behavior, predict values, and find co-occurring events
    - Learn how to deploy data mining results for real-time access by end users

    Five reasons why you should attend this 2-day Oracle University course on Oracle Data Mining. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to:

    - Go beyond simple BI and dashboards about the past. This course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage
    - Take advantage of your data and investment in Oracle technology
    - Leverage all the data in your data warehouse (customer data, service data, sales data, customer comments and other unstructured data, point of sale data) to build and deploy predictive models throughout the enterprise
    - Learn how to explore and understand your data and find patterns and relationships that were previously hidden
    - Focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments, and gaining customer insight

    Read the article

  • How do I express subtle relationships in my data?

    - by Chuck H
    "A" is related to "B" and "C". How do I show that "B" and "C" might, by this context, be related as well? Example: Here are a few headlines about a recent Broadway play: 1 - David Mamet's Glengarry Glen Ross, Starring Al Pacino, Opens on Broadway 2 - Al Pacino in 'Glengarry Glen Ross': What did the critics think? 3 - Al Pacino earns lackluster reviews for Broadway turn 4 - Theater Review: Glengarry Glen Ross Is Selling Its Stars Hard 5 - Glengarry Glen Ross; Hey, Who Killed the Klieg Lights? Problem: Running a fuzzy-string match over these records will establish some relationships, but not others, even though a human reader could pick them out from context in much larger datasets. How do I find the relationship that suggests #3 is related to #4? Both of them can be easily connected to #1, but not to each other. Is there a (Googlable) name for this kind of data or structure? What kind of algorithm am I looking for? Goal: Given 1,000 headlines, a system that automatically suggests that these 5 items are all probably about the same thing. To be honest, it's been so long since I've programmed I'm at a loss how to properly articulate this problem. (I don't know what I don't know, if that makes sense). This is a personal project and I'm writing it in Python. Thanks in advance for any help, advice, and pointers!

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers.

    The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython.

    Finally, all Oracle Data Integration products provide services that can be integrated into a larger service-oriented architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different: it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell):

        newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor, and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages?

    I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them." at the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." at the Haskell wikibook (here it's probably meant in a bit more general sense), or "newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor." in Real World Haskell.

    Read the article

  • How to setup matlab for parallel processing on Amazon EC2?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and matlab is able to work with my two cores just fine. On the EC2 instance, though, matlab is only capable of detecting 1 out of the 8 processors. Obviously the difference is that I have my 2 cores on a single processor, while the EC2 instance has 8 distinct processors. My question is, how do I get matlab to work with those 8 processors? I found this paper, but it seems related to setting up matlab with multiple EC2 instances, which is not my problem. Any help appreciated!

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words
              WHERE word BETWEEN 'foo' AND 'fooz') ON id=textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words
                     WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for table view queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone), I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
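
    For reference, a minimal Python/sqlite3 sketch of the prefix-range trick from the prototype above (table and column data are made up, and a half-open range is used in place of the BETWEEN): an index on word turns the range condition into an index scan, which is what makes as-you-type search fast.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE textTable (id INTEGER PRIMARY KEY, body TEXT);
            CREATE TABLE words (word TEXT, textTableId INTEGER);
            CREATE INDEX idx_words ON words(word);
        """)
        conn.execute("INSERT INTO textTable VALUES (1, 'football statistics')")
        conn.executemany("INSERT INTO words VALUES (?, ?)",
                         [("football", 1), ("statistics", 1)])

        prefix = "foo"
        # 'foo' <= word < 'fop': the upper bound is the prefix with its last
        # character incremented, so only words starting with 'foo' match.
        upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
        rows = conn.execute(
            """SELECT t.id, t.body FROM textTable t
               WHERE t.id IN (SELECT textTableId FROM words
                              WHERE word >= ? AND word < ?)
               LIMIT 50""", (prefix, upper)).fetchall()
        print(rows)  # [(1, 'football statistics')]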

    Read the article

  • Are there any 3rd Party libraries for image processing in C#?

    - by Gaax
    Just wondering if there's anything out there to help make my project a lot easier to deal with. I'm working on a program that uses the Wiimote and infrared pen (mapped to the mouse) to manipulate images in real time and I'd much rather not have to use all my time figuring out how to make the program resize and rotate images efficiently with the least amount of distortion... I haven't really found anything when searching. Anybody know of any libraries that can help me?

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity. I can only store a fixed number of columns, I need to create indexes on all columns, and I need to recreate the table on schema changes.

    Some of my key design criteria are:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousands of columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice (a sketch of the first follows below):

    - Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
    - Solution 2: Key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings.
    - Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better as a relational model, and being able to join the data is helpful.

    I cannot help but think that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but there's a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
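
    As a hedged illustration only, here is a tiny Python/SQLite sketch of Solution 1 (the typed-column union table) with made-up table and column names: one row per (entry, field), a type tag, and one nullable column per primitive type.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE entry_values (
                entry_id   INTEGER,
                field_name TEXT,
                type       TEXT,        -- 'int', 'real' or 'text'
                int_val    INTEGER,     -- exactly one of these is non-NULL,
                real_val   REAL,        -- matching the type tag
                text_val   TEXT
            )""")
        conn.execute("CREATE INDEX idx_ev ON entry_values(field_name, int_val, real_val, text_val)")

        def store(entry_id, field, value):
            col = {int: "int_val", float: "real_val", str: "text_val"}[type(value)]
            conn.execute(
                f"INSERT INTO entry_values (entry_id, field_name, type, {col}) VALUES (?, ?, ?, ?)",
                (entry_id, field, col[:-4], value))

        store(1, "age", 42)
        store(1, "name", "Alice")
        print(conn.execute("SELECT entry_id FROM entry_values"
                           " WHERE field_name = 'age' AND int_val > 40").fetchall())  # [(1,)]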

    Read the article

  • How to do parallel processing in a Unix shell script?

    - by Bikram Agarwal
    I have a shell script that transfers a build.xml file to a remote unix machine (devrsp02) and executes the ANT task wldeploy on that machine (devrsp02). Now, this wldeploy task takes around 15 minutes to complete, and while it is running, the last line at the unix console is "task {some digit} initialized". Once the task is complete, we get a "task Completed" message, and only then is the next task in the script executed. But sometimes there might be a problem with the weblogic domain and the deployment might be failing internally, with no effect on the status of the wldeploy task: the unix console will still be stuck at "task {some digit} initialized". The deployment error gets logged in a file called output.a.

    So, what I want now is to start a time counter before running wldeploy. If wldeploy runs for more than 15 minutes, one of the following commands should be run:

        tail -f output.a    ## without terminating the wldeploy

    or

        cat output.a        ## after terminating the wldeploy forcefully

    Note that I can't run the wldeploy task in the background, because in that case the user won't get to know when the task is complete, which is crucial for this script. Could you please suggest anything to achieve this?
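
    A minimal Python sketch of the timer idea, assuming the second option (terminate, then show the log); the exact wldeploy invocation is hypothetical, only the 15-minute bound and the output.a file come from the question:

        import subprocess

        TIMEOUT = 15 * 60  # seconds

        try:
            # Hypothetical invocation of the ANT wldeploy task
            subprocess.run(["ant", "wldeploy"], check=True, timeout=TIMEOUT)
            print("task Completed")
        except subprocess.TimeoutExpired:
            # run() kills the child when the timeout expires; now show the log
            print(open("output.a").read())
        except subprocess.CalledProcessError as exc:
            print("wldeploy failed with exit code", exc.returncode)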

    Read the article

  • How do you use Java 1.6 Annotation Processing to perform compile time weaving?

    - by Steve
    I have created an annotation, applied it to a DTO, and written a Java 1.6 style annotation processor. I can see how to have the annotation processor write a new source file, which isn't what I want to do; I cannot see or find out how to have it modify the existing class (ideally just modify the byte code). The modification is actually fairly trivial: all I want the processor to do is to insert a new getter and setter, where the name comes from the value of the annotation being processed. My annotation processor looks like this:

        @SupportedSourceVersion(SourceVersion.RELEASE_6)
        @SupportedAnnotationTypes({ "com.kn.salog.annotation.AggregateField" })
        public class SalogDTOAnnotationProcessor extends AbstractProcessor {
            @Override
            public boolean process(final Set<? extends TypeElement> annotations,
                    final RoundEnvironment roundEnv) {
                //do some stuff
            }
        }

    Read the article

  • Is there a suitable replacement for C++, when I would like to write video processing applications?

    - by Nisanio
    Hi, I want to write video editing software, and the "logical" conclusion is that the language I must use is C++... but I don't like it (sorry, C++ fans). I would like to write it in something cool, like Lisp or Haskell or Erlang... but I don't know whether the open source implementations of those languages (I don't have money to buy licenses) would let me build competitive software, performance-wise. What do you think? What do you recommend?

    Read the article

  • Parallel processing from a command queue on Linux (bash, python, ruby... whatever)

    - by mlambie
    I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes, I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem?

    Further elaboration: there are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work, it should ask the queue for the next piece. If there's no more work in the queue, the thread should die. When all the threads have died, it means all the work has been done.

    The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
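
    A minimal Python sketch of the described pattern (the commands are placeholders): a pool of 10 workers pops commands until the queue is empty, which matches the die-when-done behavior in the elaboration.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        commands = [f"imapsync --user user{i}" for i in range(200)]  # placeholder commands

        def run(cmd):
            # Each worker takes the next command as soon as it finishes one.
            return subprocess.run(cmd, shell=True).returncode

        with ThreadPoolExecutor(max_workers=10) as pool:
            results = list(pool.map(run, commands))

        print("failed:", sum(1 for rc in results if rc != 0))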

    Read the article
