Search Results

Search found 18191 results on 728 pages for 'single board'.

  • Spaces and backslashes in Visual Studio build events

    - by gencha
    I have an application that is supposed to aid my project with pre- and post-build event handling. I'm using ndesk.options for command-line argument parsing, which gave me weird results when my project path contains spaces. I thought this was the fault of ndesk.options, but I guess my own application is to blame. I call my application as a post-build event like so:

        build.exe --in="$(ProjectDir)" --out="c:\out\"

    A simple foreach over args[] displays the following:

        --in=c:\my project" --out=c:\out"

    What happened is that the last " in each parameter was treated as if it were escaped, so the trailing backslash was removed and the whole thing was treated as a single argument. Now I thought I was being smart by simply escaping the first " as well, like so:

        build.exe --in=\"$(ProjectDir)" --out=\"c:\out\"

    In that case the resulting args[] look like this:

        --in="c:\my project" --out="c:\out"

    The trailing backslash in the parameters is still swallowed, and the first parameter is now split up. Passing this args[] to ndesk.options will then yield wrong results. How should the command line look so that the correct elements end up in the correct args[] slots? Alternatively, how is one supposed to parse command-line arguments like these, with or without ndesk.options? Any suggestion is welcome. Thanks in advance.
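
    For reference, the swallowing behavior described above follows the Microsoft C runtime's argument-parsing rules: 2n backslashes before a quote become n backslashes while the quote toggles "in quotes" mode, and 2n+1 backslashes before a quote become n backslashes plus a literal quote. A rough model of those rules (Python, purely illustrative; the real parsing happens in the CRT, which also treats argv[0] slightly differently):

        def msvcrt_split(cmdline):
            """Rough model of the MSVC runtime's command-line splitting rules."""
            args, cur, in_quotes, i = [], '', False, 0
            while i < len(cmdline):
                c = cmdline[i]
                if c == '\\':
                    n = 0                                  # count the backslash run
                    while i < len(cmdline) and cmdline[i] == '\\':
                        n += 1
                        i += 1
                    if i < len(cmdline) and cmdline[i] == '"':
                        cur += '\\' * (n // 2)
                        if n % 2:                          # odd run: the quote is literal
                            cur += '"'
                            i += 1
                    else:
                        cur += '\\' * n
                elif c == '"':
                    in_quotes = not in_quotes
                    i += 1
                elif c in ' \t' and not in_quotes:
                    if cur:
                        args.append(cur)
                        cur = ''
                    i += 1
                else:
                    cur += c
                    i += 1
            if cur:
                args.append(cur)
            return args

        # $(ProjectDir) expands with a trailing backslash, so the CRT sees \" twice:
        print(msvcrt_split(r'build.exe --in="c:\my project\" --out="c:\out\"'))
        # -> ['build.exe', '--in=c:\\my project" --out=c:\\out"']  (one glued-together argument)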

  • Function getting wrong values

    - by frankie
    So I have this function in C to calculate a power, and I'm using Visual C++ 2010.

    power.h:

        void power();
        float get_power(float a, int n);

    power.c:

        void power() {
            float a, r;
            int n;

            printf("-POWER-\n");
            printf("The base: ");
            scanf("%f", &a);
            n = -1;
            while (n < 0) {
                printf("The power: ");
                scanf("%d", &n);
                if (n < 0) {
                    printf("Power must be equal or larger than 0!\n");
                } else {
                    r = get_power(a, n);
                    printf("%.2f ^ %d = %.2f", a, n, r);
                }
            }
        }

        float get_power(float a, int n) {
            if (n == 0) {
                return 1;
            }
            return a * get_power(a, n - 1);
        }

    Not the best way to do it, I know, but that's not the issue. When I debug it, the values are scanned correctly (that is, they are correct until just before the function call), but upon entering the function a becomes 0 and n becomes 1074790400, and you can guess what happens next... The first function is being called from the main file. I included the full code because I really have no idea what could be going on, and I can't even think of how to google for it... Strangely, when I wrote the function in a single file it works fine, but it should definitely work both ways. Any idea why this is happening?
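
    A detail worth noticing: 1074790400 is exactly the upper 32 bits of the IEEE-754 double 4.0, which is the signature of a float argument being promoted to double on its way into a call (the classic symptom when the caller has no prototype for the function in scope). A quick check of the bit pattern (Python sketch, assuming a base of 4 was entered at the prompt):

        import struct

        # Bytes of the double 4.0, little-endian, viewed as two 32-bit ints.
        lo, hi = struct.unpack('<ii', struct.pack('<d', 4.0))
        print(lo, hi)   # 0 1074790400 -- matches the observed a = 0, n = 1074790400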

  • Python libusb pyusb "mach-o, but wrong architecture"

    - by Jon
    I am having some trouble with the pyusb module. I have narrowed the problem down to a single line, and have created a small example script to replicate the error.

        #!/usr/bin/env python
        """
        This module was created to isolate the problem in the pyusb package.

        Operating system: Mac OS 10.6.3
        Python version:   2.6.4
        libusb 1.0.8 has been successfully installed using: sudo port install libusb
        I have also tried modifying /opt/local/etc/macports/macports.conf to force
        the i386 architecture instead of x86_64.
        """
        from ctypes import *
        import ctypes.util

        libname = ctypes.util.find_library('usb-1.0')
        print 'libname: ', libname
        l = CDLL(libname, RTLD_GLOBAL)

        # RESULT:
        # libname:  /usr/local/lib/libusb-1.0.dylib
        # Traceback (most recent call last):
        #   File "./pyusb_problem.py", line 7, in <module>
        #     l = CDLL(libname, RTLD_GLOBAL)
        #   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ctypes/__init__.py", line 353, in __init__
        #     self._handle = _dlopen(self._name, mode)
        # OSError: dlopen(/usr/local/lib/libusb-1.0.dylib, 10): no suitable image found.  Did find:
        #   /usr/local/lib/libusb-1.0.dylib: mach-o, but wrong architecture

    This same script runs on Ubuntu 10.04 successfully. I have tried building the libusb module (directly from source and through MacPorts) for 32-bit (i386) instead of x86_64 (the default on OS 10.6), but I receive the same error. Thank you in advance for your help!
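
    One way to see which side of the mismatch is which (sketch; prints the running interpreter's architecture, then shells out to the `file` command for the library's):

        import platform, subprocess

        print platform.architecture()   # architecture of this Python process, e.g. ('64bit', '')
        subprocess.call(['file', '/usr/local/lib/libusb-1.0.dylib'])   # architectures inside the dylib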

  • One Model to Rule Them All - VS2010 UML, ADO.NET Entity Data Model, and T4

    - by Eric J.
    I worked on a fairly large project a while back where we modeled the classes in Enterprise Architect and generated the (partial) POCO classes (complete with model-driven business rule validations), the persistence layer (NHibernate mapping file) and the DDL. Based on certain model attributes we could flag alternate generation strategies or indicate that a particular portion would be entirely hand-coded. There was a good deal of initial investment, but it paid large dividends over the lifetime of a 15-developer, 3-year project. I'm investigating doing something similar with the current Microsoft technology stack. The place I'm stuck is that class modeling is done with the VS 2010 UML tools, but logical data modeling is done with the Entity Data Modeler. Is it a reasonable path to use VS 2010 UML as the "single source of truth" and code-generate the .edmx files based on the class model? That's the inverse of the common path, which is to create the entity model and use a POCO generator to generate the classes. However, a good class model can be used to generate much more than just the properties, so I tend to view it as a better choice than the entity model.

  • How can I return json from my WCF rest service (.NET 4), using Json.Net, without it being a string, wrapped in quotes?

    - by Samuel Meacham
    The DataContractJsonSerializer is unable to handle many scenarios that Json.Net handles just fine when properly configured (specifically, cycles). A service method can either return a specific object type (in this case a DTO), in which case the DataContractJsonSerializer will be used, or I can have the method return a string and do the serialization myself with Json.Net. The problem is that when I return a json string as opposed to an object, the json that is sent to the client is wrapped in quotes.

    Using DataContractJsonSerializer, returning a specific object type, the response is:

        {"Message":"Hello World"}

    Using Json.Net to return a json string, the response is:

        "{\"Message\":\"Hello World\"}"

    I do not want to have to eval() or JSON.parse() the result on the client, which is what I would have to do if the json comes back as a string wrapped in quotes. I realize that the behavior is correct; it's just not what I want/need. I need the raw json, i.e. the behavior when the service method's return type is an object, not a string. So, how can I have my method return an object type but not use the DataContractJsonSerializer? How can I tell it to use the Json.Net serializer instead? Or is there some way to write directly to the response stream, so I can just return the raw json myself, without the wrapping quotes? Here is my contrived example, for reference:

        [DataContract]
        public class SimpleMessage
        {
            [DataMember]
            public string Message { get; set; }
        }

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class PersonService
        {
            // uses DataContractJsonSerializer
            // returns {"Message":"Hello World"}
            [WebGet(UriTemplate = "helloObject")]
            public SimpleMessage SayHelloObject()
            {
                return new SimpleMessage { Message = "Hello World" };
            }

            // uses Json.Net serialization, to return a json string
            // returns "{\"Message\":\"Hello World\"}"
            [WebGet(UriTemplate = "helloString")]
            public string SayHelloString()
            {
                SimpleMessage message = new SimpleMessage { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message);
                return json;
            }

            // I need a mix of the two: return an object type, but use the Json.Net serializer.
        }

  • Dealing with asynchronous control structures (Fluent Interface?)

    - by Christophe Herreman
    The initialization code of our Flex application is doing a series of asynchronous calls to check user credentials, load external data, connect to a JMS topic, etc. Depending on the context the application runs in, some of these calls are not executed, or are executed with different parameters. Since all of these calls happen asynchronously, the code controlling them is hard to read, understand, maintain and test. For each call, we need to have some callback mechanism in which we decide what call to execute next. I was wondering if anyone had experimented with wrapping these calls in executable units and having a Fluent Interface (FI) that would connect and control them. Off the top of my head, the code might look something like:

        var asyncChain:AsyncChain = execute(LoadSystemSettings)
            .execute(LoadAppContext)
            .if(IsAutologin)
                .execute(AutoLogin)
            .else()
                .execute(ShowLoginScreen)
            .etc;

        asyncChain.execute();

    The AsyncChain would be an execution tree, built with the FI (and we could of course also build one without a FI). This might be an interesting idea for environments that run in a single-threaded model like the Flash Player, Silverlight, JavaFX, ... Before I dive into the code to try things out, I was hoping to get some feedback.
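
    Not Flex, but to make the idea concrete, here is a bare-bones linear version of such a chain in Python (all names invented for illustration; the if/else branches would turn the flat list into a tree of child chains):

        class AsyncChain(object):
            """Collects steps fluently, then runs them via completion callbacks."""

            def __init__(self):
                self.steps = []

            def execute(self, step):
                self.steps.append(step)
                return self                          # enables the fluent chaining

            def run(self, i=0):
                if i < len(self.steps):
                    # each step receives a callback that advances the chain
                    self.steps[i](lambda: self.run(i + 1))

        def load_settings(done):
            print('settings loaded'); done()         # would complete asynchronously in real code

        def login(done):
            print('logged in'); done()

        AsyncChain().execute(load_settings).execute(login).run()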

  • 3x3 Sobel operator and gradient features

    - by pithyless
    Reading a paper, I'm having difficulty understanding the algorithm described: given a black-and-white digital image of a handwriting sample, cut out a single character to analyze. Since this can be any size, the algorithm needs to take this into account (if it makes things easier, we can assume the size is 2^n x 2^m). Now, the description states that given this image we will convert it to a 512-bit feature (a 512-bit hash) as follows:

    - (192 bits) Computes the gradient of the image by convolving it with a 3x3 Sobel operator. The direction of the gradient at every edge is quantized to 12 directions.
    - (192 bits) The structural feature generator takes the gradient map and looks in a neighborhood for certain combinations of gradient values (used to compute 8 distinct features that represent lines and corners in the image).
    - (128 bits) The concavity generator uses an 8-point star operator to find coarse concavities in 4 directions, holes, and large-scale strokes.

    The image feature maps are normalized with a 4x4 grid. For now I'm struggling with how to take an arbitrary image, split it into 16 sections, and use a 3x3 Sobel operator to come up with 12 bits for each section. (But if you have some insight into the other parts, feel free to comment :)
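
    A minimal sketch of how the first step could look (NumPy/SciPy assumed; the 12-direction quantization is just the gradient angle split into 30-degree bins):

        import numpy as np
        from scipy.ndimage import convolve

        SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        SOBEL_Y = SOBEL_X.T

        def gradient_directions(img, bins=12):
            """Convolve with the 3x3 Sobel kernels and quantize the gradient angles."""
            gx = convolve(img.astype(float), SOBEL_X)
            gy = convolve(img.astype(float), SOBEL_Y)
            angle = np.arctan2(gy, gx)                            # in [-pi, pi]
            quantized = np.floor((angle + np.pi) / (2 * np.pi) * bins).astype(int) % bins
            magnitude = np.hypot(gx, gy)                          # edge strength
            return quantized, magnitude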

  • What's the best way to fill or paint around an image in Java?

    - by wsorenson
    I have a set of images that I'm combining into a single image mosaic using JAI's MosaicDescriptor. Most of the images are the same size, but some are smaller. I'd like to fill in the missing space with white; by default, the MosaicDescriptor uses black. I tried setting the double[] background parameter to { 255 }, and that fills in the missing space with white, but it also introduces some discoloration in some of the other full-sized images. I'm open to any method; there are probably many ways to do this, but the documentation is difficult to navigate. I am considering converting any smaller images to a BufferedImage and calling setRGB() on the empty areas (though I am unsure what to use for the scansize on the batch setRGB() method). My question is essentially: what is the best way to take an image (in JAI, or BufferedImage) and fill / add padding to a certain size? Is there a way to accomplish this in the MosaicDescriptor call without side effects? For reference, here is the code that creates the mosaic:

        for (int i = 0; i < images.length; i++) {
            images[i] = JPEGDescriptor.create(new ByteArraySeekableStream(images[i]), null);
            if (i != 0) {
                images[i] = TranslateDescriptor.create(images[i], (float) (width * i), null, null, null);
            }
        }
        RenderedOp finalImage = MosaicDescriptor.create(ops, MosaicDescriptor.MOSAIC_TYPE_OVERLAY,
                null, null, null, null, null);

  • Where should I declare my CDI resources?

    - by Laird Nelson
    JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373

    You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection. The general gist seems to be that somewhere (and this will be the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB, @PersistenceContext or @Resource annotations and with CDI's @Produces annotations. The net effect is that Java EE 6 injects a persistence context, say, where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject). My question is: what is the community's consensus, if there is one, on:

    - what this bridge class should be named
    - where this bridge class should live
    - whether it's best to localize all this stuff into one class or make several of them

    Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it. Thanks.

  • Modify Django Forms

    - by Ninefingers
    Hi all, I've recently been developing on the Django platform and have stumbled upon Django forms (forms.Form / forms.ModelForm) as ways of creating <form> HTML. Now, this is brilliant for quick stuff, but what I'm trying to do is a little bit more complicated. Consider a DateField: my current form has fields for day, month and year, and constructs a Python date object from those. However, a Django form creates a single textbox in which the correct format (say 2010-06-15) must be entered. As another example, for large fields I need to replace <input> with <textarea>. I'd like to take advantage of Django's forms for simple validation, but I need something simpler for my users. So my question is: can I intercept the rendering of one of these objects to write out the HTML as I like? If so, do I have to do all the writing myself, or can I override only those objects I wish to re-write? Thanks in advance.
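
    For what it's worth, this sounds like what the widget argument on form fields is for; a sketch against the Django 1.x forms API (form and field names invented):

        from django import forms
        from django.forms.extras.widgets import SelectDateWidget

        class ArticleForm(forms.Form):
            # renders as three <select> boxes (day/month/year) instead of one textbox
            published = forms.DateField(widget=SelectDateWidget())
            # renders as <textarea> instead of <input type="text">
            body = forms.CharField(widget=forms.Textarea)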

  • How to get contacts in order of their upcoming birthdays?

    - by Pentium10
    I have code to read contact details and to read birthdays. But how do I get a list of contacts in order of their upcoming birthday? For a single contact identified by id, I get details and birthday like this:

        Cursor c = null;
        try {
            Uri uri = ContentUris.withAppendedId(ContactsContract.Contacts.CONTENT_URI, id);
            c = ctx.getContentResolver().query(uri, null, null, null, null);
            if (c != null) {
                if (c.moveToFirst()) {
                    DatabaseUtils.cursorRowToContentValues(c, data);
                }
            }
            c.close();

            // read birthday
            c = ctx.getContentResolver().query(
                    Data.CONTENT_URI,
                    new String[] { Event.DATA },
                    Data.CONTACT_ID + "=" + id + " AND " + Data.MIMETYPE + "= '"
                            + Event.CONTENT_ITEM_TYPE + "' AND " + Event.TYPE + "="
                            + Event.TYPE_BIRTHDAY,
                    null, Data.DISPLAY_NAME);
            if (c != null) {
                try {
                    if (c.moveToFirst()) {
                        this.setBirthday(c.getString(0));
                    }
                } finally {
                    c.close();
                }
            }
            return super.load(id);
        } catch (Exception e) {
            Log.v(TAG(), e.getMessage(), e);
            e.printStackTrace();
            return false;
        } finally {
            if (c != null) c.close();
        }

    And the code to read all contacts is:

        public Cursor getList() {
            // Get the base URI for the People table in the Contacts content provider.
            Uri contacts = ContactsContract.Contacts.CONTENT_URI;
            ContentResolver cr = ctx.getContentResolver();
            // Form an array specifying which columns to return.
            String[] projection = new String[] {
                    ContactsContract.Contacts._ID,
                    ContactsContract.Contacts.DISPLAY_NAME };
            // Make the query.
            Cursor managedCursor = cr.query(contacts, projection, null, null,
                    ContactsContract.Contacts.DISPLAY_NAME + " COLLATE LOCALIZED ASC");
            return managedCursor;
        }
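
    Since the contacts provider has no "days until next birthday" column to ORDER BY, one approach is to compute that key yourself and sort in memory. The key function, sketched in Python for brevity (leap-day birthdays would need extra handling):

        from datetime import date

        def days_until_next_birthday(birthday, today=None):
            """Days from today to the next occurrence of this birthday's month/day."""
            today = today or date.today()
            nxt = birthday.replace(year=today.year)
            if nxt < today:                       # already passed this year
                nxt = birthday.replace(year=today.year + 1)
            return (nxt - today).days

        contacts = [('Ann', date(1980, 3, 1)), ('Bob', date(1975, 12, 24))]
        contacts.sort(key=lambda c: days_until_next_birthday(c[1]))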

  • Branch prediction

    - by Alexander
    Consider the following sequence of actual outcomes for a single static branch. T means the branch is taken, N means it is not taken. For this question, assume that this is the only branch in the program.

        T T T N T N T T T N T N T T T N T N

    Assume a two-level branch predictor that uses one bit of branch history, i.e., a one-bit BHR. Since there is only one branch in the program, it does not matter how the BHR is concatenated with the branch PC to index the BHT. Assume that the BHT uses one-bit counters and that, again, all entries are initialized to N. Which of the branches in this sequence would be mis-predicted? Use the table below. Now, I am not asking for answers to this question, but rather for guides and pointers on it. What does a two-level branch predictor mean, and how does it work? What do BHR and BHT stand for?
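
    As a pointer: BHR stands for Branch History Register (here, a single bit recording the last outcome) and BHT for Branch History Table (here, two one-bit predictors, one per possible history value); "two-level" means the outcome history selects which table entry makes the prediction. A small simulation of exactly the setup described, to experiment with (Python):

        def simulate(outcomes):
            """One-bit BHR indexing a 2-entry BHT of one-bit counters, all initialized to N."""
            bhr = 0                        # last outcome: 0 = N, 1 = T
            bht = [0, 0]                   # one prediction bit per history value
            mispredicted = []
            for actual in outcomes:
                prediction = bht[bhr]
                mispredicted.append(prediction != actual)
                bht[bhr] = actual          # one-bit counter: remember the last outcome
                bhr = actual               # shift the new outcome into the history
            return mispredicted

        sequence = [1 if c == 'T' else 0 for c in 'TTTNTNTTTNTNTTTNTN']
        print(simulate(sequence))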

  • Call bindings for DependencyObject when DependencyProperties are changed

    - by melculetz
    Is there a way to notify a DependencyObject's bindings when the inner DependencyProperties have changed? For example, I have this class:

        public class BackgroundDef : DependencyObject
        {
            public static readonly DependencyProperty Color1Property =
                DependencyProperty.Register("Color1", typeof(Color), typeof(BackgroundDef),
                    new UIPropertyMetadata(Colors.White));

            public static readonly DependencyProperty UseBothColorsProperty =
                DependencyProperty.Register("UseBothColors", typeof(bool), typeof(BackgroundDef),
                    new UIPropertyMetadata(false));

            public static readonly DependencyProperty Color2Property =
                DependencyProperty.Register("Color2", typeof(Color), typeof(BackgroundDef),
                    new UIPropertyMetadata(Colors.White));

            public Color Color1
            {
                set { SetValue(Color1Property, value); }
                get { return (Color)GetValue(Color1Property); }
            }

            public bool UseBothColors
            {
                set { SetValue(UseBothColorsProperty, value); }
                get { return (bool)GetValue(UseBothColorsProperty); }
            }

            public Color Color2
            {
                set { SetValue(Color2Property, value); }
                get { return (Color)GetValue(Color2Property); }
            }
        }

    For this I have three separate two-way bindings that set the values for Color1, Color2 and UseBothColors. But I also have a binding for a BackgroundDef instance, which should create a Brush and draw the background of a button (either a single color, or two gradient colors). My problem is that the two-way bindings for the DependencyProperties update the properties, but the binding for the class instance is not called, as apparently the entire object does not change. Any idea how I could call the bindings for the DependencyObject when the DependencyProperties are changed?

  • IIS error hosting WCF Data Service on shared web host

    - by jkohlhepp
    My client has a website hosted on a shared web server. I don't have access to IIS. I am trying to deploy a WCF Data Service onto his site. I am getting this error:

        IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but
        the binding only supports specification of exactly one authentication scheme. Valid
        authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS
        settings so that only a single authentication scheme is used.

    I have searched SO and other sites quite a bit but can't seem to find someone with my exact situation. I cannot change the IIS settings because this is a third party's server and it is a shared web server. So my only option is to change things in code or in the service config. My service config looks like this:

        <system.serviceModel xdt:Transform="Insert">
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <add prefix="http://www.somewebsite.com"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <bindings>
            <webHttpBinding>
              <binding name="{Binding Name}">
                <security mode="None" />
              </binding>
            </webHttpBinding>
          </bindings>
          <services>
            <service name="{Namespace to Service}">
              <endpoint address="" binding="webHttpBinding"
                        bindingConfiguration="{Binding Name}"
                        contract="System.Data.Services.IRequestHandler" />
            </service>
          </services>
        </system.serviceModel>

    As you can see, I tried to set the security mode to "None", but that didn't seem to help. What should I change to resolve this error?

  • Plot in gnuplot

    - by guddi
    I have to plot many lines in gnuplot. No problem with the x axis; the problem I am facing is that most of the plotted lines have y values in the range [0-0.05], a few in the range 60-70, and the rest in the range 600-700 (these numbers are the y-axis scale values). But after I plot, I can see only 3 sets of lines, all messed up. There is no clarity between the lines: a line at 0 and a line at 0.003 look like one single line. If I set yrange[0:0.05], the lines within this range are clearly visible, but I want all the lines in the same graph. I have heard of breaking axes and multiplot. Can they be useful here, and how do I implement them? Anyone, please help me. Below is the script:

        set terminal png size 1300,1200 enhanced font 'Verdana,20'
        set output 'output.png'
        set key font 'Verdana,16'
        set key bottom outside
        set yrange [500:1000]
        set xtics ("25k" 25000, "50k" 50000, "75k" 75000, "100k" 100000)
        set grid
        set title 'Performance Metrics'
        set ylabel 'Metrices'
        set xlabel 'FES'
        plot 'input' using 1:2  title 'A' with linespoints linewidth 4, \
             'input' using 1:3  title 'B' with linespoints linewidth 4, \
             'input' using 1:4  title 'C' with linespoints linewidth 4, \
             'input' using 1:5  title 'D' with linespoints linewidth 4, \
             'input' using 1:6  title 'E' with linespoints linewidth 4, \
             'input' using 1:7  title 'F' with linespoints linewidth 4, \
             'input' using 1:8  title 'G' with linespoints linewidth 4, \
             'input' using 1:9  title 'H' with linespoints linewidth 4, \
             'input' using 1:10 title 'I' with linespoints linewidth 4
        set output
        set terminal windows

    input.dat is something like this (I have shown only the first row; the other rows follow the same pattern):

        25 0.002 0.05 899 455 444 0.08 0.00004 900 700 0.003

  • How does Lucene index documents?

    - by Mehdi Amrollahi
    Hello, I have read some documents about Lucene, including the material at this link (http://lucene.sourceforge.net/talks/pisa), but I don't really understand how Lucene indexes documents, and I don't understand which algorithms Lucene uses for indexing. At the above link, it says Lucene uses this incremental algorithm for indexing:

        maintain a stack of segment indices
        create index for each incoming document
        push new indexes onto the stack
        let b=10 be the merge factor; M=8

        for (size = 1; size < M; size *= b) {
            if (there are b indexes with size docs on top of the stack) {
                pop them off the stack;
                merge them into a single index;
                push the merged index onto the stack;
            } else {
                break;
            }
        }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree or any similar algorithm for indexing, or does it have a particular algorithm? Thank you for reading my post.
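
    To see why the stacking scheme keeps indexing cheap (each document takes part in only a logarithmic number of merges), here is a toy Python model of the loop above, with merge factor b=10 and the M bound dropped for simplicity:

        def add_documents(num_docs, b=10):
            """Toy model of the stack-based segment merging described above."""
            stack = []                               # sizes of the segment indices
            for _ in range(num_docs):
                stack.append(1)                      # index each new document on its own
                # merge whenever the top of the stack holds b indices of equal size
                while len(stack) >= b and len(set(stack[-b:])) == 1:
                    merged = sum(stack[-b:])
                    del stack[-b:]
                    stack.append(merged)
            return stack

        print(add_documents(345))    # -> [100, 100, 100, 10, 10, 10, 10, 1, 1, 1, 1, 1]
        print(add_documents(1000))   # -> [1000]: merges cascade in powers of b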

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code. Conversely, my Gearman workers all run code within my Django project, so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent of the code base so that I can push a new live version without restarting any workers. So, I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue? The process would work like this:

    - A user request comes in and triggers a few tasks that shouldn't be blocking.
    - Each task has a unique URL, so I enqueue a Gearman task to POST to the specified URL.
    - The Gearman server finds a worker and passes the URL and POST data to the worker.
    - The worker simply posts to the URL with the data, thus executing the task.

    Assume the following:

    - Each request from a Gearman worker is signed somehow so that we know it's coming from a Gearman server and not a malicious request.
    - Tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out).

    What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once that are triggered by a previous request, so one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit. Any thoughts?
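
    For the curious, a rough sketch of such a URL-posting worker (assuming the python-gearman client library and the requests HTTP package; the task name, payload format and the signing step are placeholders):

        import json
        import gearman
        import requests

        def post_task(worker, job):
            """Generic worker: the job payload carries the target URL and POST data."""
            payload = json.loads(job.data)
            resp = requests.post(payload['url'], data=payload['params'],
                                 timeout=10)         # enforce the 10-second cap
            return str(resp.status_code)

        worker = gearman.GearmanWorker(['localhost:4730'])
        worker.register_task('post_task', post_task)
        worker.work()                                # blocks, serving jobs forever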

  • Deploy multiple instances of an EAR (representing versions) to Glassfish

    - by Thorbjørn Ravn Andersen
    I basically want to be able to deploy multiple versions of the same EAR file to the same server (Glassfish instance?) and have a unique path to each version separating them. From my reading on this, it appears that multiple EARs deploy to the root of the web server namespace, so that they can coexist as long as they do not have colliding context-roots for their WARs. In my case, rather than everything going under "/", I'd like to be able to brand a given EAR file build to ALWAYS deploy under a given path like "/foo-20100319" or "/foo-CUSTOMER-20010101". This can easily be done with a single WAR file just by renaming it. I do not need or want the versions to disturb each other. It is my understanding that this remapping is outside the scope of the application.xml file, so I found that http://docs.sun.com/app/docs/doc/820-7693/beayr?a=view says I can specify web-uri and context-root, but I am not certain that what I wish to do can be specified with these in Glassfish. How should I approach this? I have full control over the build process. (I have found http://stackoverflow.com/questions/877390/deploying-multiple-java-web-apps-to-glassfish-in-one-go but I am not certain how to apply it to what I need.)

  • How to improve the use of Delphi Frames

    - by Brian Frost
    I've used frames in Delphi for years, and they are one of the most powerful features of the VCL, but standard use of them seems to carry some risk, such as:

    - It's easy to accidentally move or edit the frame sub-components on a frame's host form without realising that you are 'tweaking' the frame. I know this does not affect the original frame code, but it's generally not what you would want.
    - When working with the frame you are still exposed to its sub-components for visual editing, even when that frame is years old and should not be touched.

    So I got to thinking...

    - Is there a way of 'grouping' components such that their positions are 'locked'? This would be useful for finished forms as well as frames. Often other developers return code to me where only the form bounds have changed, and even they did not intend any change.
    - Is there any way of turning a frame and its components into a single Delphi component? If so, the frame internals would be completely hidden and its usability would increase further.

    I'm interested in any thoughts... Brian.

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo {
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        } Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos) {
            printf("*******Insertando\n");
            int i;
                if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
                if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for (i = 0; i < strlen(auxBin); i++)
                insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition, for debug purposes.) This yields me the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and I fixed a potential buffer overflow (he'd forgotten to copy the NULL terminator in insert->numBin). I can't think of a single reason why this would happen, nor do I have any idea what else I should provide as further info. (Compiling on latest stable MinGW under fully-patched Windows 7; my friend is using MinGW under Windows XP. On my machine, at least, it only happens when GDB's not attached.) Any ideas? Suggestions? Possible exorcism techniques? (Our current hack is copying the sig pointer to a temp variable and restoring it after malloc. It breaks anyway: it turns out the 2nd malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)

  • C# Regex - Replace multiple characters at once without overwriting?

    - by Everaldo Aguiar
    Hello guys, I'm implementing a C# program that should automate a mono-alphabetic substitution cipher. The functionality I'm working on at the moment is the simplest one: the user provides a plain text and a cipher alphabet, for example:

        Plain text (input):   THIS IS A TEST
        Cipher alphabet:      A - Y, H - Z, I - K, S - L, E - J, T - Q
        Cipher text (output): QZKL KL Y QJLQ

    I thought of using regular expressions since I've been programming in Perl for a while, but I'm encountering some problems in C#. First, I would like to know if someone has a suggestion for a regular expression that would replace all occurrences of each letter with its corresponding cipher letter (provided by the user) at once, and without overwriting anything.

    Example: in this case, the user provides the plaintext "TEST", and in his cipher alphabet he wishes to have all T's replaced with E, all E's replaced with Y, and all S's replaced with J. My first thought was to substitute each occurrence of a letter with an individual placeholder character and then replace each placeholder with the cipher letter corresponding to the plaintext letter. Using the same example word "TEST", the steps taken by the program to produce an answer would be:

    1 - replace T's with (let's say) @
    2 - replace E's with #
    3 - replace S's with &
    4 - replace @ with E, # with Y, & with J
    5 - output = EYJE

    This solution doesn't seem workable for large texts. I would like to know if anyone can think of a single regular expression that would let me replace each letter in a given text with its corresponding letter in a 26-letter cipher alphabet, without splitting the task into an intermediate step as I mentioned. If it helps to visualize the process, this is a print screen of my GUI for the program: http://img43.imageshack.us/img43/2118/11618743.jpg
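
    The usual trick for doing this in a single pass, in any regex flavor that supports a replacement callback, is to look each match up in a table as it is found, so nothing ever gets re-substituted (in C#, Regex.Replace accepts a MatchEvaluator delegate for exactly this). The idea, sketched in Python:

        import re

        cipher = {'T': 'E', 'E': 'Y', 'S': 'J'}   # the example alphabet from above

        def encrypt(plaintext, cipher):
            # one pass: each matched letter is looked up once and never rescanned
            return re.sub('[A-Z]', lambda m: cipher.get(m.group(0), m.group(0)), plaintext)

        print(encrypt('TEST', cipher))   # EYJE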

  • Using Mapping Models to migrate between Core Data Object Models

    - by westsider
    I have a fairly simple scheme. Essentially, Run <-- Data (where a Run holds data, e.g., Temperature, sampled from some sort of sensor). Now, it seems that sensors can have more than one measurement (e.g., Temperature and Humidity), so a single Run could have multiple data samples. Hence, Run <-- Sample and Sample <-- Data. (And for simplicity I am leaving Run <-- Data in place, for now.)

    If I create a new mapping model, then things generally work, except that no new Samples are created and no relationships are established between Runs and Samples nor between Samples and Datas. I am trying to get the mapping model to migrate my model, but even the slightest change to the generated mapping model results in Cocoa error 134110. For example, if I take the "Sample" mapping (which has no source) and set its Source to 'Run' (so that I can set Sample's inverse relationship 'run' appropriately), then the mapping changes its name to "RunToSample". There are two relationships handled in this mapping: data and run. The data property gets set automatically to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "DataToData", $source.dataSet)

    Following this example, I set the run property to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToRun", $source)

    Similarly, I set the 'sample' property mapping in RunToRun to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source)

    and the 'sample' property in DataToData to:

        FUNCTION($manager, "destinationInstancesForEntityMappingNamed:sourceInstances:", "RunToSample", $source.run)

    So, what, I wonder, is going wrong? I have tried various permutations, such as leaving the 'inverse' relationships unspecified, but I continue to get the same error (134110) regardless. I imagine that this is a lot easier than it seems and that I am missing some fundamental but minor piece. I have also tried subclassing NSEntityMigrationPolicy and overriding -createDestinationInstancesForSourceInstance:, but these efforts have met with much the same results. Thanks in advance for any pointers or (relevant :-) advice.

  • Storing statistics of multiple data types in SQL Server 2008

    - by Mike
    I am creating a statistics module in SQL Server 2008 that allows users to save data in any number of formats (date, int, decimal, percent, etc.). Currently I am using a single table to store these values as type varchar, with an extra field to denote the datatype it should be. When I display the value, I use that datatype field to format it. I use sprocs to calculate the data for reporting, and the datatype field to convert to the appropriate datatype for the appropriate calculations. This approach works, but I don't like storing all kinds of data in a varchar field. The only alternative I can see is to have separate tables for each datatype I want to store, and to save each record to the appropriate table based on datatype. To retrieve, I run a case statement to join the appropriate table and get the data. That solves the problem, but seems like a lot of work for... what gain? I'm wondering if I'm missing something here. Is there a better way to do this? Thanks in advance!

  • How to set up matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4 GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 of the 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that locally I have 2 cores on a single processor, while the EC2 instance has 8 distinct processors. My question is: how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

  • DATE lookup table (1990/01/01:2041/12/31)

    - by Frank Developer
    I use a DATE master table for looking up dates and other values in order to control several events, intervals and calculations within my app. It has a row for every single day, beginning 01/01/1990 and ending 12/31/2041. One example of how I use this lookup table: a customer pawned an item on JAN-31-2010 and returns on MAY-03-2010 to make an interest pymt to avoid forfeiting the item. If he pays 1 month's interest, the employee enters a "1" and the app looks up the pawn date (JAN-31-2010) in the date master table and puts FEB-28-2010 in the applicable interest pymt date. FEB-28 is returned because FEB-31 doesn't exist! If 2010 were a leap year, it would have returned FEB-29. If the customer pays 2 months, MAR-31-2010 is returned; 3 months, APR-30... If the customer pays more than 3 months, or another period not covered by the date lookup table, the employee manually enters the applicable date. Here's what the date lookup table looks like:

        { Copyright 1990:2010, Frank Computer, Inc. }
        { DBDATE=YMD4- (correctly sorted for faster lookup) }

        CREATE TABLE datemast
        (
            dm_lookup       DATE,       {lookup col used for obtaining values below}
            dm_workday      CHAR(2),    {NULL=Normal Working Date,}
                                        {NW=National Holiday(Working Date),}
                                        {NN=National Holiday(Non-Working Date),}
                                        {NH=National Holiday(Half-Day Working Date),}
                                        {CN=Company Proclamated(Non-Working Date),}
                                        {CH=Company Proclamated(Half-Day Working Date)}
            {several other columns omitted}
            dm_description  CHAR(30),   {NULL, holiday description or any comments}
            dm_day_num      SMALLINT,   {number of elapsed days since beginning of year}
            dm_days_left    SMALLINT,   {number of remaining days until end of year}
            dm_plus1_mth    DATE,       {plus 1 month from lookup date}
            dm_plus2_mth    DATE,       {plus 2 months from lookup date}
            dm_plus3_mth    DATE,       {plus 3 months from lookup date}
            dm_fy_begins    DATE,       {fiscal year begins on for lookup date}
            dm_fy_ends      DATE,       {fiscal year ends on for lookup date}
            dm_qtr_begins   DATE,       {quarter begins on for lookup date}
            dm_qtr_ends     DATE,       {quarter ends on for lookup date}
            dm_mth_begins   DATE,       {month begins on for lookup date}
            dm_mth_ends     DATE,       {month ends on for lookup date}
            dm_wk_begins    DATE,       {week begins on for lookup date}
            dm_wk_ends      DATE,       {week ends on for lookup date}
            {several other columns omitted}
        ) IN "S:\PAWNSHOP.DBS\DATEMAST";

    Is there a better way of doing this, or is it a cool method?
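
    For comparison, the clamped date arithmetic that the dm_plus1_mth/dm_plus2_mth/dm_plus3_mth columns precompute ("add N months, falling back to the last day of the target month") can also be done on the fly; a Python sketch:

        import calendar
        from datetime import date

        def add_months_clamped(d, n):
            """JAN-31 + 1 month -> FEB-28 (or FEB-29 in a leap year)."""
            month_index = d.month - 1 + n
            year = d.year + month_index // 12
            month = month_index % 12 + 1
            last_day = calendar.monthrange(year, month)[1]
            return date(year, month, min(d.day, last_day))

        print(add_months_clamped(date(2010, 1, 31), 1))   # 2010-02-28
        print(add_months_clamped(date(2010, 1, 31), 2))   # 2010-03-31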
