Search Results

Search found 64086 results on 2564 pages for 'algebraic data types'.


  • Data structure name: combination array/linked list

    - by me_and
    I have come up with a data structure that combines some of the advantages of linked lists with some of the advantages of fixed-size arrays. It seems very obvious to me, and so I'd expect someone to have thought of it and named it already. Does anyone know what this is called: Take a small fixed-size array. If the number of elements you want to put in your array is greater than the size of the array, add a new array and whatever pointers you like between the old and the new. Thus you have:

    Static array

        —————————————————————————
        |1|2|3|4|5|6|7|8|9|a|b|c|
        —————————————————————————

    Linked list

        ————  ————  ————  ————  ————
        |1|*->|2|*->|3|*->|4|*->|5|*->NULL
        ————  ————  ————  ————  ————

    My thing:

        ————————————  ————————————
        |1|2|3|4|5|*->|6|7|8|9|a|*->NULL
        ————————————  ————————————
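    For reference, this chunked layout (fixed-size arrays joined by next pointers) is conventionally known as an unrolled linked list. Below is a minimal Python sketch of the idea, not taken from the post above; the chunk capacity of 5 is chosen only to match the diagram.

        class Chunk:
            """A fixed-size array node plus a pointer to the next chunk."""
            def __init__(self, capacity=5):
                self.items = []            # holds at most `capacity` elements
                self.capacity = capacity
                self.next = None           # link to the next fixed-size array

        class ChunkedList:
            def __init__(self, capacity=5):
                self.head = Chunk(capacity)

            def append(self, value):
                node = self.head
                while node.next is not None:          # walk to the last chunk
                    node = node.next
                if len(node.items) == node.capacity:  # last chunk full: link a new one
                    node.next = Chunk(node.capacity)
                    node = node.next
                node.items.append(value)

            def __iter__(self):
                node = self.head
                while node is not None:
                    yield from node.items
                    node = node.next

        lst = ChunkedList()
        for n in range(1, 8):
            lst.append(n)
        print(list(lst))   # [1, 2, 3, 4, 5, 6, 7], stored as |1|2|3|4|5| -> |6|7|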


  • Extracting ID from data packet GPS

    - by user604134
    Hi, I am trying to configure a GPS device to work with my system. The GPS device sends the data packet to my IP in the following format: $$?W??¬ÿÿÿÿ™U042903.000,A,2839.6408,N,07717.0905,E,0.00,,230111,,,A*7C|1.2|203|0000÷ I am able to extract the latitude, longitude and other information, but I am not able to extract the tracker ID from the string. According to the manual the ID is in hex format, and the format of the packet is $$\r\n I don't know what to do with it; I have tried converting this to hex, but it didn't work. Any help will be greatly appreciated. Thanks
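    Since the ID is binary rather than text, one way to locate it is to stop decoding the packet as a string and hex-dump the raw bytes instead, then match the field offsets against the manual. The sketch below is only an illustration ("packet.bin" is a hypothetical capture of one packet, and no specific offset is assumed).

        def hex_dump(packet: bytes, width: int = 16) -> None:
            """Print offset, hex bytes and printable ASCII side by side."""
            for offset in range(0, len(packet), width):
                chunk = packet[offset:offset + width]
                hexpart = " ".join(f"{b:02X}" for b in chunk)
                text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
                print(f"{offset:04X}  {hexpart:<{width * 3}}  {text}")

        # Read one packet exactly as it arrived (bytes, not a decoded string)
        # and inspect the region between the "$$" header and the NMEA-style text,
        # which is where a fixed-length binary ID would normally sit.
        with open("packet.bin", "rb") as f:
            hex_dump(f.read())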


  • Generating data on unlevel background

    - by bluejay93
    I want to make an unlevel background and then generate some test data on it using Matlab. I was not clear when I asked this question earlier. Take this simple example, where X and Y have already been defined:

        for i = 1:10
            for j = 1:10
                f(i,j) = X.^2 + Y.^2
            end
        end

    This plots the data on a flat surface. I don't want to distort the function itself, but I want the surface that it sits on to be unlevel, tilted or shifted by some amount. I hope that's a little clearer.
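    One possible approach, sketched in Python/NumPy rather than MATLAB and with arbitrarily chosen tilt and noise values: compute the function on the grid as before, then add a tilted plane plus small random irregularities as the uneven base it sits on.

        import numpy as np

        x = np.arange(1, 11)
        X, Y = np.meshgrid(x, x)

        f = X**2 + Y**2                                 # the undistorted test function
        base = 0.5 * X - 0.3 * Y                        # a tilted ("unlevel") plane
        base = base + np.random.normal(scale=0.2, size=X.shape)  # small irregularities

        surface = f + base                              # test data on the uneven background
        print(surface.shape)                            # (10, 10)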


  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware because it is not up to the mark; in reality, that is very difficult to do. It is really hard to convince an infrastructure team to change any hardware, because in their view it is performing at its best. I once had a nightmare over this issue with an infrastructure team when I suggested they replace their faulty hardware, because they initially would not accept that their hardware was at fault. It is easy to say "Trust me, I am correct", but it is equally important to put some logical reasoning behind that statement. PAGEIOLATCH_XX is the kind of wait stat that we would like to blame directly on the underlying subsystem, and of course, most of the time that is correct – the underlying subsystem usually is the problem.

    From Books Online:

    - PAGEIOLATCH_DT: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem.
    - PAGEIOLATCH_EX: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
    - PAGEIOLATCH_KP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem.
    - PAGEIOLATCH_SH: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
    - PAGEIOLATCH_UP: Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem.

    PAGEIOLATCH_XX explanation: simply put, this wait type occurs when a task is waiting for data to be moved from disk into the buffer cache.

    Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a challenging and interesting subject to resolve. Here are a few things you can experiment with:

    - Improve your I/O subsystem speed (read the first paragraph of this article if you have not; I repeat that it is far easier to recommend a step like this than to actually implement it).
    - This type of wait can also result from memory pressure or other memory issues. Setting aside a faulty I/O subsystem, this wait type warrants proper analysis of the memory counters: if for any reason memory is not optimal and cannot receive the I/O data, that situation alone can produce this kind of wait.
    - Proper placement of files is very important. Check the file system for proper placement – LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables in a separate filegroup (and on a separate disk), and so on.
    - Check the file statistics and see whether there are high I/O read and I/O write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    - It is quite possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the I/O bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and I/O load (assuming the covering index has far fewer columns than the clustered table, and all the other "it depends" conditions hold). You can refer to two articles I have previously written on how to optimize indexes: Create Missing Indexes and Drop Unused Indexes.
    - Updating statistics helps the Query Optimizer render an optimal plan, whether directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this – never mind!) can give the SQL Server optimizer the information it needs to produce an efficient plan.

    Checking memory-related Perfmon counters:

    - SQLServer: Memory Manager\Memory Grants Pending (a value consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high values; compare against your benchmark)
    - SQLServer: Buffer Manager\Buffer cache hit ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (a value consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:

    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than your benchmark is not good)

    Note: The information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Books Online for further clarification. All of the discussion of wait stats on this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology


  • How do I make a Data Validation drop-down exclude blanks?

    - by Iszi
    Related: How can I use non-adjacent cells on another sheet for a Data Validation drop-down, and only show non-blank values? For now, I've worked around the above problem by re-arranging my sheet so all the Data Validation Source cells are in one range. I'm leaving the above question open though, because I think it still poses an interesting problem. However, the issue now is that the Data Validation drop-down isn't working in the way I expected it to (and how I believe others are telling me it should). Even though I've got everything into one named range, Excel still shows blanks in a drop-down that references that range.

    Setup (Sheet1):

        A1 = (blank)   B1 = Header
        A2 = 1         B2 = Value1
        A3 = 2         B3 = Value2
        A4 = 3         B4 = Value3
        A5 = 4         B5 = (empty)
        A6 = 5         B6 = (empty)
        A7 = 6         B7 = (empty)

    Sheet1!B2:B7 is named Validation. Sheet2!A1 is set to use Data Validation with a Source of =Validation and an in-cell drop-down. The drop-down in Sheet2!A1 shows:

        Value1
        Value2
        Value3
        .
        .
        .

    (Dots represent blank lines.) How can I get rid of these blank lines in the in-cell drop-down, while still including Sheet1!B5:B7 in the Data Validation Source?

    Note: I nuked the sheet and tried it again without column A from Sheet1 (putting the values from column B in the above example into column A), and it worked fine. Adding column A back, though, brought the blanks back into the Data Validation drop-down. What do I need to do to keep column A as I want it and keep the in-cell drop-down clean?


  • Transparent Data Encryption Helps Customers Address Regulatory Compliance

    - by Troy Kitch
    Regulations such as the Payment Card Industry Data Security Standards (PCI DSS), U.S. state security breach notification laws, HIPAA HITECH and more, call for the use of data encryption or redaction to protect sensitive personally identifiable information (PII). From the outset, Oracle has delivered the industry's most advanced technology to safeguard data where it lives—in the database. Oracle provides a comprehensive portfolio of security solutions to ensure data privacy, protect against insider threats, and enable regulatory compliance for both Oracle and non-Oracle Databases. Organizations worldwide rely on Oracle Database Security solutions to help address industry and government regulatory compliance. Specifically, Oracle Advanced Security helps organizations like Educational Testing Service, TransUnion Interactive, Orbitz, and the National Marrow Donor Program comply with privacy and regulatory mandates by transparently encrypting sensitive information such as credit cards, social security numbers, and personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides organizations the most cost-effective solution for comprehensive data protection. Watch the video and learn why organizations choose Oracle Advanced Security with transparent data encryption.


  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let’s start by answering the question: why exactly do you want to do that? Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled “When design time data fails”. Creating design time data for your app When a designer works on an app, he needs to see something to design. For “static” UI such as buttons, backgrounds, etc, the user interface elements are going to show up in Blend just fine. If however the data is fetched dynamically from a service (web, database, etc) or created dynamically, most probably Blend is going to show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, need to log in, etc…), to measure what is on the screen (colors, margins, width and height, etc) using various tools, going back to Blend, editing the properties of the elements, running again, etc. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine “Deep dive MVVM” and “MVVM Applied From Silverlight to Windows Phone to Windows 8”. The source code for these talks is here and here. Design time data in MVVM Light One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let’s create a new MVVM Light application in Visual Studio. Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser, you will need one more step to install the Project Templates. Start Visual Studio 2012. Create a new MvvmLight (Win8) app. Run the application. You will see a string showing “Welcome to MVVM Light”. In the Solution explorer, right click on MainPage.xaml and select Open in Blend. Now you should see “Welcome to MVVM Light [Design]” What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life applicationm, DataService would be used to connect to a web service, for instance. When design time data fails Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening. Expression Blend is not giving a lot of information about what happened. Thankfully, we can use a trick: Attaching a debugger to Expression Blend and debug the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the “Managed (v4.5, v4.0) code” option even for Silverlight!!) In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let’s illustrate that: Open the file DesignDataService in the Design folder. 
Modify the GetData method to look like this: public void GetData(Action<DataItem, Exception> callback) { throw new Exception(); // Use this to create design time data var item = new DataItem("Welcome to MVVM Light [design]"); callback(item, null); } Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don’t get a warning message. We need to investigate what’s wrong. Close MainPage.xaml Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select “Managed (v4.5, v4.0) code” in the “Attach to” field. Find the process named XDesProc.exe. You should have at least two, one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let’s find out in the Task Manager. Press Ctrl-Alt-Del and select Task Manager Go to the Details tab and sort the processes by name. Find the one that says “Blend for Microsoft Visual Studio 2012 XAML UI Designer” and write down the process ID. Go back to the Attach to Process dialog in Visual Studio. sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe. Open the MainViewModel (in the ViewModel folder) Place a breakpoint on the first line of the MainViewModel constructor. Go to Blend and open the MainPage.xaml again. At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the dataservice call, and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue. More info and Conclusion I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an Exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn


  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. TestDisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
        Partition       Start       End         Size in sectors
        >P MS Data      435868      456606      20739       [NO NAME]
         P MS Data      19232600    19235479    2880        [NO NAME]
         D MS Data      41945087    83890143    41945057
         D MS Data      57151486    168579069   111427584
         D MS Data      67637246    141037565   73400320
         D MS Data      151523326   193466365   41943040
         D MS Data      170617328   170618223   896
         D MS Data      170631168   170634047   2880
         D MS Data      171338232   171344405   6174        [Boot]
         D MS Data      172008235   172231918   223684      [NO NAME]
         P MS Data      193466368   214437887   20971520
         D MS Data      217321375   225321678   8000304     [root]
         D MS Data      224923646   308809725   83886080    [media]
         D MS Data      308809728   420237311   111427584
         D MS Data      418910206   481824765   62914560    [vmimages]

    My partition table had 3 primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside LVM I had 9 or 10 LVM partitions; one of them was my home (encrypted with LUKS). TestDisk can't recover my partition table or any other partition, and the partitions marked [P] don't have any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract e.g. the [root] LVM partition from the above TestDisk report? I am afraid that my disk was also corrupted.
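    For the extraction itself, dd would be roughly "dd if=laptop.img of=root.img bs=512 skip=217321375 count=8000304" for the [root] row, assuming the usual 512-byte sectors (which matches the CHS geometry above). The same carve can be written as a short Python sketch, purely as an illustration; the start and length values are taken from the TestDisk listing above.

        SECTOR = 512                      # assumed sector size
        START = 217_321_375               # [root] start sector from the listing
        COUNT = 8_000_304                 # [root] size in sectors from the listing

        with open("laptop.img", "rb") as src, open("root.img", "wb") as dst:
            src.seek(START * SECTOR)
            remaining = COUNT * SECTOR
            while remaining > 0:
                chunk = src.read(min(1024 * 1024, remaining))   # copy 1 MiB at a time
                if not chunk:
                    break                 # image ended early (truncated or corrupted)
                dst.write(chunk)
                remaining -= len(chunk)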


  • Oracle Data Integrator Demo Webcast - Next Webcast - November 21st, 2013

    - by Javier Puerta
    Oracle Data Integrator Demo Webcast. Next webcast: November 21st, 2013. The ODI Product Management team will be hosting a demonstration webcast of Oracle Data Integrator regularly. We will be showing baseline functionality and covering special topics as requested by our customers. Attendance at these webcasts is open to customers and partners.

    Webcast format (the same format is followed for each presentation):
    - 05 minutes - Background & Overview
    - 30 minutes - Introduction to ODI Features
    - 15 minutes - Drill-Down into Special Topics
    - 10 minutes - Questions and Answers

    Next webcast special topic: Oracle Data Integrator 12c

    Webcast details: Thursday, November 21st 2013, 10:00 AM PST | 1:00 PM EST | 6:00 PM CET (1 hour)
    - Web conference link: 594 942 837 (https://oracleconferencing.webex.com)
    - Dial-in number: AMER: 1-866-682-4770 (More Numbers)
    - Phone meeting ID/passcode: 3096713/505638

    More information on Oracle Data Integrator (ODI): Learn more about Oracle Data Integrator. Download Oracle Data Integrator 12c. Oracle Data Integrator Webcast Archive.


  • Constricted A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrict a little bit. Basically : I use an A* to find the shortest path between 2 randomly placed room in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you go sideways. Everything is fine, but in some corner cases, it fails to find a path (when there is obviously one). Like here between the blue and red dot : (i'm in unity btw, but i don't think it matters) Here is the code of the actual A* (a bit long, and some redundency) while(current != goal) { //add stair up / stair down foreach(Node<GridUnit> test in current.Neighbors) { if(!test.Data.empty && test != goal) continue; //bug at arrival; if(test == goal && penul !=null) { Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(currentDiff.y,0)) { //wanna drop on the last if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,to.Data.bounds.center)) { continue; } else { if(Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x) && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z)) { continue; } } } } if(current.Data.parentUnit != null) { Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center; Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center; if(!Mathf.Approximately(previousDiff.y,0)) { if(!Mathf.Approximately(currentDiff.y,0)) { //you wanna drop now : continue; } if(current.Data.parentUnit.parentUnit != null) { if(!coplanar(test.Data.bounds.center,current.Data.bounds.center,current.Data.parentUnit.bounds.center,current.Data.parentUnit.parentUnit.bounds.center)) { continue; }else { if(Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x) && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z)) { continue; } } } } } g = current.Data.g + HEURISTIC(current.Data,test.Data); h = HEURISTIC(test.Data,goal.Data); f = g + h; if(open.Contains(test) || closed.Contains(test)) { if(test.Data.f > f) { //found a shorter path going passing through that point test.Data.f = f; test.Data.g = g; test.Data.h = h; test.Data.parentUnit = current.Data; } } else { //jamais rencontré test.Data.f = f; test.Data.h = h; test.Data.g = g; test.Data.parentUnit = current.Data; open.Add(test); } } closed.Add (current); if(open.Count == 0) { Debug.Log("nothingfound"); //nothing more to test no path found, stay to from; List<GridUnit> r = new List<GridUnit>(); r.Add(from.Data); return r; } //sort open from small to biggest travel cost open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) { return (int)(x.Data.f-y.Data.f); }); //get the smallest travel cost node; Node<GridUnit> smallest = open[0]; current = smallest; open.RemoveAt(0); } //build the path going backward; List<GridUnit> ret = new List<GridUnit>(); if(penul != null) { ret.Insert(0,to.Data); } GridUnit cur = goal.Data; ret.Insert(0,cur); do{ cur = cur.parentUnit; ret.Insert(0,cur); } while(cur != from.Data); return ret; You see at the start of the foreach i constrict the A* like i said. If you have any insight it would be cool. Thanks
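    This is not the Unity/C# code above, but the usual cure for exactly this failure mode is to make the previous move part of the node identity, so the closed set can no longer discard a cell that is still reachable via a different last move. A stripped-down grid sketch in Python (unit step costs, Manhattan heuristic, "no two vertical moves in a row"):

        import heapq
        from itertools import count

        MOVES = [(1, 0, 0), (-1, 0, 0), (0, 0, 1), (0, 0, -1),   # sideways
                 (0, 1, 0), (0, -1, 0)]                           # up / down

        def manhattan(a, b):
            return sum(abs(x - y) for x, y in zip(a, b))

        def find_path(start, goal, blocked, size):
            """Cells are (x, y, z) tuples with each coordinate in range(size)."""
            tie = count()                          # tie-breaker so states never get compared
            frontier = [(manhattan(start, goal), next(tie), 0, (start, None), [start])]
            seen = set()
            while frontier:
                _, _, g, (cell, last), path = heapq.heappop(frontier)
                if cell == goal:
                    return path
                if (cell, last) in seen:
                    continue
                seen.add((cell, last))             # closed set keyed on (cell, last move)
                for move in MOVES:
                    if move[1] != 0 and last is not None and last[1] != 0:
                        continue                   # forbid two vertical steps in a row
                    nxt = tuple(c + d for c, d in zip(cell, move))
                    if nxt in blocked or not all(0 <= c < size for c in nxt):
                        continue
                    f = g + 1 + manhattan(nxt, goal)
                    heapq.heappush(frontier, (f, next(tie), g + 1, (nxt, move), path + [nxt]))
            return None                            # genuinely unreachable

        print(find_path((0, 0, 0), (2, 2, 2), blocked=set(), size=3))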


  • Extracting, Transforming, and Loading (ETL) Process

    The process of extracting, transforming, and loading data into a data warehouse is called the Extract, Transform, Load (ETL) process. This process can be used to obtain, analyze, and clean data from various data sources so that it can be stored in a uniform manner within a data warehouse. The data can then be used by various business intelligence processes to give an organization a more in-depth analysis of the current state of the company and where it is heading. A standard ETL process used by a health care system might import all of its patients' names, diagnoses, and prescriptions into a unified data warehouse so that trends can be spotted for outbreaks like the flu, and so that potential illnesses a patient might develop can be predicted from other patients with similar symptoms.
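    As a toy illustration of the three stages (not from the article; the file name and column names are invented), the whole pipeline can be sketched with the Python standard library: extract rows from a CSV export, transform them into a uniform shape, and load them into a warehouse table.

        import csv
        import sqlite3

        def extract(path):
            """Pull raw patient rows out of a source system's CSV export."""
            with open(path, newline="") as f:
                yield from csv.DictReader(f)

        def transform(rows):
            """Clean each row into the uniform shape the warehouse expects."""
            for row in rows:
                yield (row["patient_name"].strip().title(),
                       row["diagnosis"].strip().lower(),
                       row["prescription"].strip().lower())

        def load(records, db_path="warehouse.db"):
            """Append the cleaned records to the warehouse fact table."""
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS patient_facts "
                        "(patient_name TEXT, diagnosis TEXT, prescription TEXT)")
            con.executemany("INSERT INTO patient_facts VALUES (?, ?, ?)", records)
            con.commit()
            con.close()

        load(transform(extract("hospital_export.csv")))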


  • iPhone core data problem : referenceData64 only defined for abstract class

    - by occe
    I have an application that downloads and parses a big XML file and stores the information using Core Data (approx. 4000 objects/entities). The XML is loaded and parsed on a different thread, which has its own NSManagedObjectContext. When trying to save the entities to the persistent store, I sometimes (about 20% of the time) get the following error:

        2010-03-03 23:41:42.802 xxx[7487:4203] Exception in XML saving
        2010-03-03 23:41:42.802 xxx[7487:4203] Description: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!
        2010-03-03 23:41:42.803 xxx[7487:4203] Name: NSInvalidArgumentException
        2010-03-03 23:41:42.804 xxx[7487:4203] UserInfo: (null)
        2010-03-03 23:41:42.805 xxx[7487:4203] Reason: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!

    I keep a simple integer count of the entities the application creates and compare it with the insertedObjects property of the NSManagedObjectContext before saving; when I get the error, these numbers do not match, and insertedObjects in the NSManagedObjectContext is missing about 10 entities. I do not know how to continue investigating this problem; does anyone have any idea how to fix it? Thanks /oscar


  • How to create this MongoMapper custom data type?

    - by Kapslok
    I'm trying to create a custom MongoMapper data type in RoR 2.3.5 called Translatable: class Translatable < String def initialize(translation, culture="en") end def languages end def has_translation(culture)? end def self.to_mongo(value) end def self.from_mongo(value) end end I want to be able to use it like this: class Page include MongoMapper::Document key :title, Translatable, :required => true key :content, String end Then implement like this: p = Page.new p.title = "Hello" p.title(:fr) = "Bonjour" p.title(:es) = "Hola" p.content = "Some content here" p.save p = Page.first p.languages => [:en, :fr, :es] p.has_translation(:fr) => true en = p.title => "Hello" en = p.title(:en) => "Hello" fr = p.title(:fr) => "Bonjour" es = p.title(:es) => "Hola" In mongoDB I imagine the information would be stored like: { "_id" : ObjectId("4b98cd7803bca46ca6000002"), "title" : { "en" : "Hello", "fr" : "Bonjour", "es" : "Hola" }, "content" : "Some content here" } So Page.title is a string that defaults to English (:en) when culture is not specified. I would really appreciate any help.


  • Algorithm for converting hierarchical flat data (w/ ParentID) into sorted flat list w/ indentation levels

    - by eagle
    I have the following structure:

        MyClass {
            guid   ID
            guid   ParentID
            string Name
        }

    I'd like to create an array which contains the elements in the order they should be displayed in a hierarchy (e.g. according to their "left" values), as well as a hash which maps the guid to the indentation level. For example:

        ID  Name      ParentID
        ------------------------
        1   Cats      2
        2   Animal    NULL
        3   Tiger     1
        4   Book      NULL
        5   Airplane  NULL

    This would essentially produce the following objects:

        // Array is an array of all the elements sorted by the way you would
        // see them in a fully expanded tree
        Array[0] = "Airplane"
        Array[1] = "Animal"
        Array[2] = "Cats"
        Array[3] = "Tiger"
        Array[4] = "Book"

        // IndentationLevel is a hash of GUIDs to IndentationLevels.
        IndentationLevel["1"] = 1
        IndentationLevel["2"] = 0
        IndentationLevel["3"] = 2
        IndentationLevel["4"] = 0
        IndentationLevel["5"] = 0

    For clarity, this is what the hierarchy looks like:

        Airplane
        Animal
            Cats
                Tiger
        Book

    I'd like to iterate through the items the fewest times possible. I also don't want to create a hierarchical data structure; I'd prefer to use arrays, hashes, stacks, or queues. The two objectives are: store a hash of the ID to the indentation level, and sort the list that holds all the objects according to their left values. When I get the list of elements, they are in no particular order. Siblings should be ordered by their Name property.

    Update: This may seem like I haven't tried coming up with a solution myself and simply want others to do the work for me. However, I have tried coming up with three different solutions, and I've gotten stuck on each. One reason might be that I've tried to avoid recursion (maybe wrongly so). I'm not posting the partial solutions I have so far since they are incorrect and may badly influence the solutions of others.
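    One non-recursive pass over the example data (sketched in Python with plain integers standing in for the GUIDs, not the asker's code): group rows by ParentID, sort each sibling group by Name, then walk the forest with an explicit stack, recording the depth as you go.

        rows = [                                   # (ID, Name, ParentID) from the example
            (1, "Cats", 2), (2, "Animal", None), (3, "Tiger", 1),
            (4, "Book", None), (5, "Airplane", None),
        ]

        children = {}                              # ParentID -> [(Name, ID), ...]
        for id_, name, parent in rows:
            children.setdefault(parent, []).append((name, id_))
        for siblings in children.values():
            siblings.sort()                        # siblings ordered by Name

        ordered, indent = [], {}
        stack = [(name, id_, 0) for name, id_ in reversed(children.get(None, []))]
        while stack:
            name, id_, level = stack.pop()
            ordered.append(name)
            indent[id_] = level
            for child_name, child_id in reversed(children.get(id_, [])):
                stack.append((child_name, child_id, level + 1))

        print(ordered)   # ['Airplane', 'Animal', 'Cats', 'Tiger', 'Book']
        print(indent)    # {5: 0, 2: 0, 1: 1, 3: 2, 4: 0}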


  • Marshal.StringToCoTaskMemAnsi converting non-Latin characters when sending raw data to a printer

    - by rem
    For sending raw data to a thermal DATAMAX printer I'm using RawPrinterHelper class from this Microsoft KB article. When a string sent to printer contains only Latin characters, everything is OK. But non-Latin, in my case Russian characters in a string, are not printed correct. I think the problem is in using Marshal.StringToCoTaskMemAnsi method for converting the string: public static bool SendStringToPrinter(string szPrinterName, string szString) { IntPtr pBytes; Int32 dwCount; // How many characters are in the string? dwCount = szString.Length; // Assume that the printer is expecting ANSI text, and then convert // the string to ANSI text. pBytes = Marshal.StringToCoTaskMemAnsi(szString); // Send the converted ANSI string to the printer. SendBytesToPrinter(szPrinterName, pBytes, dwCount); Marshal.FreeCoTaskMem(pBytes); return true; } Just to note, Russian characters in the string are put in hex format, like "\x83", but nevertheless the method doesn't put this hex value in unmanaged memory as it is, but converts it, I think, according with ANSI code page to a character and then printer can not read it correctly. If I try to compose a file, using Hex editor and put correct hex values in place of non-Latin characters and then send the file to a printer using another method from the same class SendFileToPrinter, everything, including Russian characters is printed correctly. How in this case the problem with sending string, containing non-Latin characters, could be solved?


  • How to fetch distinct values in Core Data?

    - by Andy
    So in looking through Core Data Snippets, I found the following code: ... [request setEntity:entity]; [request setResultType:NSDictionaryResultType]; [request setReturnsDistinctValues:YES]; [request setPropertiesToFetch:[NSArray arrayWithObject:@"<#Attribute name#>"]]; // Execute the fetch NSError *error; id requestedValue = nil; // WTF? This isn't defined or used anywhere NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error]; if (objects == nil) { // handle the error } This is great and seems perfect for what I need...but how does one actually use it? I assume since it's returning dictionaries, I need a key to get at the values - but where's the key defined? Is that the "id requestedValue = nil" line? If so, how does "requestedValue" become the key? Xcode gives me a compiler warning about an unused variable at the "requestedValue" declaration. I feel like I'm missing something here. Thanks in advance for any assistance you can offer.


  • Substitute values (for specific dates) from a second data frame to the first data frame

    - by user1665355
    I have two time series data frames: The first one: head(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 1999-03-01 -0.7000000 0.2000000 -0.1000000 -1.5000000 -1.0000000 -0.4000000 1999-03-02 -0.5035247 0.0998004 -0.7007007 -0.2030457 0.4040404 -0.3012048 1999-03-03 -0.2024291 0.2991027 0.0000000 -0.6103764 0.1006036 -0.1007049 1999-03-04 0.7099391 0.2982107 1.5120968 -0.1023541 0.5025126 0.4032258 1999-03-05 2.4169184 0.8919722 2.1847071 2.7663934 -1.2000000 0.0000000 1999-03-08 0.3933137 0.3929273 0.5830904 -0.0997009 -0.2024291 1.1044177 tail(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 2011-12-23 0.68241470 0.84790673 0.9441385 0.6116208 0.5822862 -0.2345300 2011-12-26 -0.05213764 0.00000000 0.0000000 0.0000000 0.0000000 0.0000000 2011-12-27 0.20865936 0.05254861 0.3117693 0.2431611 0.0000000 -0.7233273 2011-12-28 -0.62467465 -1.20798319 -1.1655012 -0.9702850 -2.0414381 -2.4043716 2011-12-29 0.52383447 0.47846890 0.8647799 0.5511329 -0.0933126 -1.2504666 2011-12-30 0.26055237 1.03174603 -0.4676539 1.2180268 1.9613948 1.7388017 The second one: head(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 1999-06-01 0.00000000 0.24438520 0.0000000 0 -0.88465521 0.008522842 1999-07-01 0.12630441 0.06755621 0.0000000 0 0.29394697 0.000000000 1999-08-02 0.07441812 0.18922829 0.0000000 0 0.02697299 -0.107155063 1999-09-01 -0.36952701 0.08684107 0.1117509 0 0.24520976 0.000000000 1999-10-01 0.00000000 0.00000000 0.0000000 0 0.00000000 1.941266205 1999-11-01 0.41879925 0.00000000 0.0000000 0 0.00000000 -0.197897901 tail(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 2011-07-01 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000 -0.1141162 2011-08-01 0.00000000 0.0000000 0.0000000 0.0000000 0.02627347 0.0000000 2011-09-01 -0.02470873 0.2977585 -0.0911891 0.6367605 0.00000000 0.2830977 2011-10-03 0.42495188 0.0000000 0.4200743 -0.4420027 -0.41012646 0.0000000 2011-11-01 0.00000000 0.0000000 0.0000000 -0.6597739 0.00000000 0.0000000 2011-12-01 0.50273034 0.0000000 0.0000000 0.6476393 0.00000000 0.0000000 The first df cointains daily observations. The second df contains only the "first day of each month" forecasted values. I would like to substitute the values from the second df into the first one. In other words, the "first day of each month" values in the first df will be substituted for the "first day of each month" values from the second df. I tried to write an lapply loop that substitutes the values and was only trying to use match function. But I failed. I could not find the similar question at StackOverflow either... Greatful for any suggestions!
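    The question asks for R, but the intended operation may be easier to see as a small pandas sketch (tiny invented frames stand in for the real data): take the first-of-month dates that both frames share, then overwrite those rows of the daily frame with the forecast values.

        import pandas as pd

        daily = pd.DataFrame({"MSCI_ACWI": [0.10, 0.20, 0.30, 0.40]},
                             index=pd.to_datetime(["1999-05-31", "1999-06-01",
                                                   "1999-06-02", "1999-07-01"]))
        monthly = pd.DataFrame({"MSCI_ACWI": [9.99, 8.88]},
                               index=pd.to_datetime(["1999-06-01", "1999-07-01"]))

        common = daily.index.intersection(monthly.index)    # shared first-of-month dates
        daily.loc[common, monthly.columns] = monthly.loc[common]
        print(daily)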


  • What are the correct bindings for an NSComboBox for use with Core Data

    - by theMikeSwan
    Imagine if you will a Core Data app with two entities (Employee, and Department). Employees have a to-one relationship with department (department) and the inverse is a to-many relationship (employees). In the UI you can select individual Employee entities and edit the details in a detail area (there are of course other attributes and there is UI for adding and editing Department entities). When using a popup button the bindings are: content = PopUpArrayController.arrangedObjects content values = PopUpArrayController.arrangedObjects.name (name is an NSString) selected object = EmployeeArrayController.selection.department.name This allows for viewing of all departments in the popup menu, correct selection of the current Employee's department, and allows that department to be changed as expected. The goal is to change this for an NSComboBox so that the user can tab to the box and type the department name in without switching to the mouse. I have tried numerous different bindings to accomplish this. I even had it work for one run with these bindings: content = PopUpArrayController.arrangedObjects.name value = EmployeeArrayController.selection.department.name At least once this worked as expected (it even added a new department when the entered text did not match any existing department). Now however it will display the available Departments and auto complete but will not update the model with the correct value when the value is changed in the combo box. If the Department is set or changed with the popup the correct department is shown in the combo box. Does anyone know what I am missing? Thanks.


  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:

        Order Header
          Order Detail
            Prod 1
            Prod 2
            Prod 3
          Order Detail
            Prod 4
            Prod 5

    That is one order header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? From what I have read, a UNION statement can tax the process. Any feedback, suggestions or recommendations would be greatly appreciated. Below is my first attempt at using LINQ:

        var orderdetail = (from o in context.ORDERSUBHEADs
                           select new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 })
                          .Union(from o in context.ORDERSUBHEADs
                                 select new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 })
                          .Union(from o in context.ORDERSUBHEADs
                                 select new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 });
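    For what it's worth, the reshaping itself (one pass that unpivots the three repeating column groups into one record per line item) looks like this as a language-neutral Python sketch; only the column names come from the question, the sample row is invented. In LINQ, the analogous single-pass shape would typically be a SelectMany over the three slots rather than two Unions.

        rows = [  # invented sample data; only the column names mirror the question
            {"EDPNOS_001": 111, "EXTPRICES_001": 9.99, "ITEMQTYS_001": 2,
             "EDPNOS_002": 222, "EXTPRICES_002": 4.50, "ITEMQTYS_002": 1,
             "EDPNOS_003": None, "EXTPRICES_003": None, "ITEMQTYS_003": None},
        ]

        def detail_lines(rows):
            for row in rows:
                for slot in ("001", "002", "003"):
                    edp = row[f"EDPNOS_{slot}"]
                    if edp is not None:              # skip unused slots
                        yield {"edpNo": edp,
                               "price": row[f"EXTPRICES_{slot}"],
                               "qty": row[f"ITEMQTYS_{slot}"]}

        print(list(detail_lines(rows)))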


  • Core Data NSPredicate to filter results

    - by Bryan
    I have an NSManagedObject that contains a bID and a pID. Within the set of NSManagedObjects I only want a subset returned, and I'm struggling to find the correct NSPredicate, or any other way, to get what I need out of Core Data. Here's my full list:

        bid  pid
        41   0
        42   41
        43   0
        44   0
        47   41
        48   0
        49   0
        50   43

    There is a parent-child relationship above. Rules: if a record's PID = 0, that record IS a parent record. If a record's PID != 0, then that record's PID refers to its parent record's BID. Example:

        1) BID = 41 is a parent record. Why? Because records BID = 42 and BID = 47 have PIDs of 41, meaning those are its children.
        2) BID = 42 has a parent record with a BID = 41.
        3) BID = 43 is a parent record.
        4) BID = 44 is a parent record.
        5) BID = 47 has a parent record with a BID = 41 because its PID = 41. See #1 above.
        6) BID = 48 is a parent record.
        7) BID = 49 is a parent record.
        8) BID = 50 is a child record, and its parent record has a BID = 43.

    See the pattern? Now, from that, I want only the following rows fetched:

        bid  pid
        44   0
        47   41
        48   0
        49   0
        50   43

    BID = 44, BID = 48, BID = 49 should all be returned because there are no records with a PID equal to their BID. BID = 47 should be returned because it is the most recent child of PID = 41. BID = 50 should be returned because it is the most recent child of PID = 43. Hope this helps explain it more.
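    One reading of the rule that is consistent with the sample output: group each parent with its children and keep only the newest (highest-BID) row of each family. A quick Python check of that interpretation (purely illustrative, not Core Data):

        rows = [(41, 0), (42, 41), (43, 0), (44, 0), (47, 41), (48, 0), (49, 0), (50, 43)]

        families = {}                       # parent bid -> newest bid seen in that family
        for bid, pid in rows:
            parent = bid if pid == 0 else pid
            families[parent] = max(families.get(parent, 0), bid)

        keep = sorted(families.values())
        print(keep)                         # [44, 47, 48, 49, 50]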


  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to base this analysis mainly on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDD is the data structure used to represent these relations.

    Relations:

        input  vP0    (variable : V, heap : H)
        input  store  (base : V, field : F, source : V)
        input  load   (base : V, field : F, dest : V)
        input  assign (dest : V, source : V)
        output vP     (variable : V, heap : H)
        output hP     (base : H, field : F, target : H)

    Rules:

        vP(v, h)      :- vP0(v, h)
        vP(v1, h)     :- assign(v1, v2), vP(v2, h)
        hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
        vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)

    I need to understand whether Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, i.e. unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?
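    To make the relational semantics concrete, here are the four rules run to a fixpoint in a naive Python sketch (neither Maude nor BDD-based Datalog, and far slower than either; the tiny input program at the bottom is invented).

        def points_to(vP0, assign, store, load):
            vP = set(vP0)                        # (variable, heap object)
            hP = set()                           # (base heap, field, target heap)
            changed = True
            while changed:
                changed = False
                new_vP, new_hP = set(vP), set(hP)
                # vP(v1, h) :- assign(v1, v2), vP(v2, h)
                for v1, v2 in assign:
                    for v, h in vP:
                        if v == v2:
                            new_vP.add((v1, h))
                # hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
                for v1, f, v2 in store:
                    for h1 in {h for v, h in vP if v == v1}:
                        for h2 in {h for v, h in vP if v == v2}:
                            new_hP.add((h1, f, h2))
                # vP(v2, h2) :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)
                for v1, f, v2 in load:
                    for h1 in {h for v, h in vP if v == v1}:
                        for hb, hf, h2 in hP:
                            if hb == h1 and hf == f:
                                new_vP.add((v2, h2))
                if new_vP != vP or new_hP != hP:
                    vP, hP, changed = new_vP, new_hP, True
            return vP, hP

        # x = new A()  (x points to h1);  y = x;  y.f = x;  z = y.f
        result = points_to(vP0={("x", "h1")},
                           assign={("y", "x")},
                           store={("y", "f", "x")},
                           load={("y", "f", "z")})
        print(result)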


  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example:

        cubes = map (\x -> x*x*x) [1..]
        is_cube n = n == (head $ dropWhile (<n) cubes)

    It is much faster than calculating the cube root, but it has complexity O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list, for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I could use a hash function (like mod), but I think it would consume much more memory to have several lists (or a list of lists), and it wouldn't lower the complexity of the lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly, I've never studied CS). I think the fact that all the integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
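    Sketched in Python rather than Haskell (the translation is mechanical): the O(log n) behaviour does not need a stored array at all; a binary search for the integer cube root over exact integers already gives it, with no floating-point cube root involved.

        def is_cube(n: int) -> bool:
            """Exact O(log n) perfect-cube test via binary search on the root."""
            lo, hi = 0, 1
            while hi * hi * hi < n:          # grow the bracket until it covers n
                hi *= 2
            while lo <= hi:                  # standard binary search
                mid = (lo + hi) // 2
                cube = mid * mid * mid
                if cube == n:
                    return True
                if cube < n:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return False

        print([x for x in range(1, 30) if is_cube(x)])   # [1, 8, 27]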


  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other. // set up the fetch request NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // grab all of the series in the core data store NSError *error = nil; availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]]; [fetchRequest release]; // grab one of the series Series *currentSeries = [availableSeries objectAtIndex:1]; // load all of the tiles attached to the series through the relationship NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here! Instruments reports back that the final line has a 16 byte leak cause by NSPlaceHolderString. Stack trace: 2 UIKit UIApplicationMain 3 UIKit -[UIApplication _run] 4 CoreFoundation CFRunLoopRunInMode 5 CoreFoundation CFRunLoopRunSpecific 6 GraphicsServices PurpleEventCallback 7 UIKit _UIApplicationHandleEvent 8 UIKit -[UIApplication sendEvent:] 9 UIKit -[UIApplication handleEvent:withNewEvent:] 10 UIKit -[UIApplication _runWithURL:sourceBundleID:] 11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:] 12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49 13 UIKit -[UIViewController view] 14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58 15 CoreData -[_NSFaultingMutableSet allObjects] 16 CoreData -[_NSFaultingMutableSet willRead] 17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:] 18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:] 19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:] 20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:] 21 Foundation +[NSString stringWithFormat:] 22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:] 23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux 24 CoreFoundation _CFStringAppendFormatAndArgumentsAux 25 Foundation _NSDescriptionWithLocaleFunc 26 CoreFoundation -[NSObject respondsToSelector:] 27 libobjc.A.dylib class_respondsToSelector 28 libobjc.A.dylib lookUpMethod 29 libobjc.A.dylib _cache_addForwardEntry 30 libobjc.A.dylib _malloc_internal I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?


  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light as to why is my app crashing with the following error on iPhone OS 2.2.1 dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation I have weak linked CoreData.framework, and have the Base SDK set to 3.0 and Deployment Target set to SDK 2.2 The app already uses other 3.0 features when available and I did not have any problems with those. But apparently the backward-compatibility methods used for other features do not work with Core Data. The app crashes before app delegate's applicationDidFinishLaunching gets called. Here's the debugger log: [Session started at 2010-05-25 20:17:03 -0400.] GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys001 Loading program into debugger… sharedlibrary apply-load-rules all warning: Unable to read symbols from "MessageUI" (not yet mapped into memory). warning: Unable to read symbols from "CoreData" (not yet mapped into memory). Program loaded. target remote-mobile /tmp/.XcodeGDBRemote-12038-42 Switching to remote-macosx protocol mem 0x1000 0x3fffffff cache mem 0x40000000 0xffffffff none mem 0x00000000 0x0fff none run Running… [Switching to thread 10755] [Switching to thread 10755] Re-enabling shared library breakpoint 1 Re-enabling shared library breakpoint 2 Re-enabling shared library breakpoint 3 Re-enabling shared library breakpoint 4 Re-enabling shared library breakpoint 5 (gdb) continue warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found). dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName Expected in: /System/Library/Frameworks/Foundation.framework/Foundation (gdb)


  • Where to put data management rules for complex data validation in ASP.NET MVC?

    - by TheRHCP
    Hello, I am currently working on an ASP.NET MVC 2 project. This is the first time I am working on a real MVC web application. The ASP.NET MVC website really helped me get started fast, but my understanding of data model validation is still hazy. My problem is that I do not really know where to handle my populated data model when it comes to complex validation rules. For example, validating a string field with a regex is quite easy: I know I just have to decorate the field with a specific attribute, so those data management rules live in the model. But if I have multiple fields that need to be validated against each other, for example several datetime fields that must satisfy a specific time rule, where should I validate them? I know that I could create my own validation attributes, but sometimes validation requires a specific validation path that is too complex to express with attributes. This leads me to a related question: is it right to validate a model in the controller? For the moment that is the only way I have found to do complex validation, but it feels a bit dirty, it does not really fit the controller's role, and it is much harder to test (multiple code paths). Thanks.

