Search Results

Search found 27928 results on 1118 pages for 'quasar the space thing'.


  • How to manage GetDate() with Entity Framework

    - by wcpro
    I have a column like this in one of my database tables: DateCreated, datetime, default(GetDate()), not null. I am trying to use the Entity Framework to do an insert on this table like this:

        PlaygroundEntities context = new PlaygroundEntities();
        Person p = new Person
        {
            Status = PersonStatus.Alive,
            BirthDate = new DateTime(1982,3,18),
            Name = "Joe Smith"
        };
        context.AddToPeople(p);
        context.SaveChanges();

    When I run this code I get the following error: "The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value. The statement has been terminated." So I tried setting the StoreGeneratedPattern to Computed... same thing, then Identity... same thing. Any ideas?
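    One common explanation, and a minimal sketch of a workaround (illustrative only, not necessarily the fix the poster settled on): the unset DateCreated property defaults to DateTime.MinValue (0001-01-01), Entity Framework sends that value to the server, and it falls outside the range SQL Server's datetime type can store, hence the datetime2-to-datetime conversion error.

        // Hedged sketch: populate the column in code so the default(GetDate()) is never needed.
        Person p = new Person
        {
            Status = PersonStatus.Alive,
            BirthDate = new DateTime(1982, 3, 18),
            Name = "Joe Smith",
            DateCreated = DateTime.Now   // avoids sending DateTime.MinValue to the server
        };
        context.AddToPeople(p);
        context.SaveChanges();

    If the database should keep ownership of the column, the frequently cited alternative is to mark DateCreated with StoreGeneratedPattern="Computed" in the storage model (SSDL) as well as the conceptual model; setting it only through the designer changed the CSDL, so the stale value was still sent to the server.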

    Read the article

  • Parallelism in .NET – Part 2, Simple Imperative Data Parallelism

    - by Reed
    In my discussion of Decomposition of the problem space, I mentioned that Data Decomposition is often the simplest abstraction to use when trying to parallelize a routine.  If a problem can be decomposed based off the data, we will often want to use what MSDN refers to as Data Parallelism as our strategy for implementing our routine.  The Task Parallel Library in .NET 4 makes implementing Data Parallelism, for most cases, very simple.

    Data Parallelism is the main technique we use to parallelize a routine which can be decomposed based off data.  Data Parallelism refers to taking a single collection of data, and having a single operation be performed concurrently on elements in the collection.  One side note here: Data Parallelism is also sometimes referred to as the Loop Parallelism Pattern or Loop-level Parallelism.  In general, for this series, I will try to use the terminology used in the MSDN Documentation for the Task Parallel Library.  This should make it easier to investigate these topics in more detail.

    Once we've determined we have a problem that, potentially, can be decomposed based on data, implementation using Data Parallelism in the TPL is quite simple.  Let's take our example from the Data Decomposition discussion – a simple contrast stretching filter.  Here, we have a collection of data (pixels), and we need to run a simple operation on each element of the pixel data.  Once we know the minimum and maximum values, we most likely would have some simple code like the following:

        for (int row=0; row < pixelData.GetUpperBound(0); ++row)
        {
            for (int col=0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        }

    This simple routine loops through a two dimensional array of pixelData, and calls the AdjustContrast routine on each pixel.

    As I mentioned, when you're decomposing a problem space, most iteration statements are potentially candidates for data decomposition.  Here, we're using two for loops – one looping through rows in the image, and a second nested loop iterating through the columns.  We then perform one, independent operation on each element based on those loop positions. This is a prime candidate – we have no shared data, no dependencies on anything but the pixel which we want to change.  Since we're using a for loop, we can easily parallelize this using the Parallel.For method in the TPL:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col=0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, by simply changing our first for loop to a call to Parallel.For, we can parallelize this portion of our routine.  Parallel.For works, as do many methods in the TPL, by creating a delegate and using it as an argument to a method.  In this case, our for loop iteration block becomes a delegate created via a lambda expression.  This lets you write code that, superficially, looks similar to the familiar for loop, but functions quite differently at runtime.

    We could easily do this to our second for loop as well, but that may not be a good idea.  There is a balance to be struck when writing parallel code.  We want to have enough work items to keep all of our processors busy, but the more we partition our data, the more overhead we introduce.  In this case, we have an image of data – most likely hundreds of pixels in both dimensions.  By just parallelizing our first loop, each row of pixels can be run as a single task.  With hundreds of rows of data, we are providing fine enough granularity to keep all of our processors busy.

    If we parallelize both loops, we're potentially creating millions of independent tasks.  This introduces extra overhead with no extra gain, and will actually reduce our overall performance.  This leads to my first guideline when writing parallel code: Partition your problem into enough tasks to keep each processor busy throughout the operation, but not more than necessary to keep each processor busy.

    Also note that I parallelized the outer loop.  I could have just as easily partitioned the inner loop.  However, partitioning the inner loop would have led to many more discrete work items, each with a smaller amount of work (operate on one pixel instead of one row of pixels).  My second guideline when writing parallel code reflects this: Partition your problem in a way to place the most work possible into each task.

    This typically means, in practice, that you will want to parallelize the routine at the "highest" point possible in the routine, typically the outermost loop.  If you're looking at parallelizing methods which call other methods, you'll want to try to partition your work high up in the stack – as you get into lower level methods, the performance impact of parallelizing your routines may not overcome the overhead introduced.

    Parallel.For works great for situations where we know the number of elements we're going to process in advance.  If we're iterating through an IList<T> or an array, this is a typical approach.  However, there are other iteration statements common in C#.  In many situations, we'll use foreach instead of a for loop.  This can be more understandable and easier to read, but also has the advantage of working with collections which only implement IEnumerable<T>, where we do not know the number of elements involved in advance.

    As an example, let's take the following situation.  Say we have a collection of Customers, and we want to iterate through each customer, check some information about the customer, and if a certain case is met, send an email to the customer and update our instance to reflect this change.  Normally, this might look something like:

        foreach(var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're doing a fair amount of work for each customer in our collection, but we don't know how many customers exist.  If we assume that theStore.GetLastContact(customer) and theStore.EmailCustomer(customer) are both side-effect free, thread-safe operations, we could parallelize this using Parallel.ForEach:

        Parallel.ForEach(customers, customer =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                theStore.EmailCustomer(customer);
                customer.LastEmailContact = DateTime.Now;
            }
        });

    Just like Parallel.For, we rework our loop into a method call accepting a delegate created via a lambda expression.  This keeps our new code very similar to our original iteration statement; however, this will now execute in parallel.  The same guidelines apply with Parallel.ForEach as with Parallel.For.

    The other iteration statements, do and while, do not have direct equivalents in the Task Parallel Library.  These, however, are very easy to implement using Parallel.ForEach and the yield keyword.

    Most applications can benefit from implementing some form of Data Parallelism.  Iterating through collections and performing "work" is a very common pattern in nearly every application.  When the problem can be decomposed by data, we often can parallelize the workload by merely changing foreach statements to Parallel.ForEach method calls, and for loops to Parallel.For method calls.  Any time your program operates on a collection, and does a set of work on each item in the collection where that work is not dependent on other information, you very likely have an opportunity to parallelize your routine.
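    A rough sketch of that last idea (with hypothetical WorkQueue/WorkItem types, purely illustrative): the body of a while loop becomes an iterator, and Parallel.ForEach consumes the items it yields.

        // Wrap the open-ended loop in an iterator so it looks like a collection.
        static IEnumerable<WorkItem> PendingItems(WorkQueue queue)
        {
            while (queue.HasWork)          // the original loop condition
            {
                yield return queue.Take(); // produce one element per pass
            }
        }

        // Parallel.ForEach walks the enumerator (sequentially) and dispatches each
        // yielded item to the thread pool, so the work itself runs concurrently.
        Parallel.ForEach(PendingItems(queue), item => Process(item));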

    Read the article

  • Spring - adding BindingResult to newly created model attribute

    - by Max
    My task is to create a model attribute from the given request parameters, to validate it (in the same method), and to pass it, as a whole, to the View. I was given this code:

        // Create the model attribute from request parameters
        Promotion promotion = Promotions.get(someRequestParam);
        // Add the attribute to the model
        modelMap.addAttribute("promotion", promotion);
        if (!promotion.validate()) {
            BindingResult errors = new BeanPropertyBindingResult(promotion, "promotion");
            errors.reject("promotion.invalid");
            // TODO: This is the part I don't like
            model.put(BindingResult.MODEL_KEY_PREFIX + "promotion", errors);
        }

    This sure works, but the part that builds the key from MODEL_KEY_PREFIX and the attribute name looks very hackish and not very Spring-like to me. Is there a way to make the same thing prettier?

    Read the article

  • Language-agnostic term for typed things that need memory

    - by FredOverflow
    Is there an accepted general term that subsumes the concepts of variables, class instances and arrays? Basically "any typed thing that needs memory". In C++, such a thing is called an object, but I'm looking for a more language-agnostic term.

        § 1.8 The C++ object model
        1 The constructs in a C++ program create, destroy, refer to, access, and manipulate objects. An object is a region of storage. [...] An object can have a name (Clause 3). An object has a storage duration (3.7) which influences its lifetime (3.8). An object has a type (3.9).

    Read the article

  • RIA Services versus WCF services: what is the difference

    - by Budda
    There is a lot of information on how to build a Silverlight application using .NET RIA Services, but it isn't clear what is unique to RIA Services and absent from WCF. Here are a few topics that talk around this question: http://stackoverflow.com/questions/1647225/ria-services-versus-wcf-services http://stackoverflow.com/questions/945123/net-ria-services-wcf-services But they don't give an answer to the question. Sorry for the stupid question, but what does the "RIA Services" layer bring into your app if you already have "Silverlight <-- WCF Service <-- Business Logic <-- Entity Framework Model <-- Database"? Authentication? Validation? Is it really an asset for you? At the moment the only thing I see is that with RIA Services you don't need to host a WCF service manually and don't need to configure any references on the client side (client side == Silverlight application). Probably I don't know some very useful features of RIA Services? So could you please point me to a good doc for that? Many thanks.

    Read the article

  • CSRF (Cross-site request forgery) attack example and prevention in PHP

    - by Saif Bechan
    I have a website where people can place a vote like this: http://mysite.com/vote/25 This will place a vote on item 25. I want to make this available only to registered users, and only if they want to do it. Now I know that when someone is busy on the website and someone else gives them a link like this: http://mysite.com/vote/30 then the vote will be placed for them on that item without them wanting to do it. I have read the explanation on the OWASP website, but I don't really understand it. Is this an example of CSRF, and how can I prevent it? The best thing I can think of is adding something to the link, like a hash. But it will be quite irritating to put something on the end of all the links. Is there no other way of doing this? Another thing: can someone maybe give me some other example of this, because the explanation on that website seems fairly vague to me.

    Read the article

  • Graphviz DOT arrange Nodes in circles

    - by Ivo Wetzel
    OK, here's my problem: I'm generating a graph of a Python module, including all the files with their functions/methods/classes. I want to arrange it so that nodes gather in circles around their parent nodes. Currently everything is on one gargantuan horizontal row, which makes the thing 50k pixels wide and also makes the SVG converter fail (it only renders about half of the graph). I went through the docs (http://www.graphviz.org/doc/info/attrs.html) but couldn't find anything that seems to do the trick. So the question is: is there a simple way to do this, or do I have to lay the whole thing out myself? :/

    Read the article

  • Question About Example In Robert C Martin's _Clean Code_

    - by Jonah
    This is a question about the concept of a function doing only one thing. It won't make sense without some relevant passages for context, so I'll quote them here. They appear on pgs 37-38:

        To say this differently, we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns. To include the setups, we include the suite setup if this is a suite, then we include the regular setup. It turns out to be very difficult for programmers to learn to follow this rule and write functions that stay at a single level of abstraction. But learning this trick is also very important. It is the key to keeping functions short and making sure they do "one thing." Making the code read like a top-down set of TO paragraphs is an effective technique for keeping the abstraction level consistent.

    He then gives the following example of poor code:

        public Money calculatePay(Employee e) throws InvalidEmployeeType {
            switch (e.type) {
                case COMMISSIONED:
                    return calculateCommissionedPay(e);
                case HOURLY:
                    return calculateHourlyPay(e);
                case SALARIED:
                    return calculateSalariedPay(e);
                default:
                    throw new InvalidEmployeeType(e.type);
            }
        }

    and explains the problems with it as follows: There are several problems with this function. First, it's large, and when new employee types are added, it will grow. Second, it very clearly does more than one thing. Third, it violates the Single Responsibility Principle (SRP) because there is more than one reason for it to change. Fourth, it violates the Open Closed Principle (OCP) because it must change whenever new types are added.

    Now my questions. To begin, it's clear to me how it violates the OCP, and it's clear to me that this alone makes it poor design. However, I am trying to understand each principle, and it's not clear to me how SRP applies. Specifically, the only reason I can imagine for this method to change is the addition of new employee types. There is only one "axis of change." If details of the calculation needed to change, this would only affect the submethods like "calculateHourlyPay()". Also, while in one sense it is obviously doing 3 things, those three things are all at the same level of abstraction, and can all be put into a TO paragraph no different from the example one: TO calculate pay for an employee, we calculate commissioned pay if the employee is commissioned, hourly pay if he is hourly, etc. So aside from its violation of the OCP, this code seems to conform to Martin's other requirements of clean code, even though he's arguing it does not. Can someone please explain what I am missing? Thanks.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

    Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name in this example. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL you can press the Find existing WSDLs button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you are intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient).

    Note: The perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures.

    If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.

    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

    Once the Invoke action is connected to the Web Service an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this click on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the caller.

    You now need to add an Assign node to assign the input to the Web Service. To do this select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target).

    Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification.

    Double clicking the Transform node will allow you to name the node. In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space. To make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping.

    I can now save the whole project as my BPEL process is now complete. As you can see the following happens:

    - We accept input from the client (the userid for the call) in the receiveInput step.
    - We assign that value to the input parameters for the Web Service call in the AssignUser step.
    - We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    - We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    - We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see this is a basic BPEL process, but it gives you the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principles.

    Read the article

  • ModelBinding to an EntitySet (MVC2 & LinqToSQL)

    - by Myster
    Hi all. There seems to be an issue with the default model binder: binding to an EntitySet causes the EntitySet to be empty. The problem is described here and here: Microsoft's response is: ... We have now fixed this and the fix will be included in .NET Framework 4.0. With the new behavior, if the EntitySet passed into Assign is the same object as the one it is being assigned to, no action will occur. A workaround is to edit the generated code like so:

        public override EntitySet<Thing> Things
        {
            get { return this._Things; }
            set
            {
                // CORRECTION:
                _Things = new EntitySet<Thing>();
                _Things.Assign(value);
                // WAS: this._Things.Assign(value);
            }
        }

    As you can imagine this is not ideal, as you have to re-add the code every time it is regenerated. Does anyone have a better solution?

    Read the article

  • How to insert a radio group inside data-grid using jQuery EasyUI framework?

    - by android phonegap
    I have a rough model of my application which looks something like the picture below. I am using the jQuery EasyUI datagrid framework to get this, but I am not able to insert a radio group as one of my columns, as shown in the Status column of my picture. Can anyone please tell me how to insert radio buttons inside a datagrid table? And the other thing is: is the datagrid the only way to get these types of functions, or is there another way through which we can get the same thing? If anyone knows any other way, please help me. Thank you.

    Read the article

  • 24 hours per day and freelance programming jobs

    - by Luca
    I'm working on stimulating projects at my job. I like it. I like programming! I have accumulated several years of experience now. Sometimes I develop other projects on the side (even more stimulating than my main job). Some more money cannot hurt! The problem is that my free time has decreased a lot, leading me to develop until late in the evening. I usually program each day (I like to develop my own projects, even if only a few lines at a time). But it is one thing to program for my own pleasure, and another thing to program for business. So, my question is: how do you balance free time with these additional jobs? What experiences do you have? How much can you develop for a long time (over a medium interval, say, weeks)? Every thought is welcome!

    Read the article

  • line break problem with MultiCell in FPDF

    - by user156073
    I am using the Java port of FPDF. I am encountering the following errors.

    1) When I call MultiCell two times, each time the text is printed on a new line:

        MultiCell(0, 1, "abcd", currentBorders, Alignment.LEFT, false); // prints on one line
        MultiCell(0, 1, "efg", currentBorders, Alignment.LEFT, false);  // prints on next line

    I want there to be no line break after the call to MultiCell. How can I do that?

    2) If I do the following, then some part of my string gets printed on one line and some on the next:

        MultiCell(getStringWidth(myString), 1, myString, currentBorders, Alignment.LEFT, false);

    3) If I do the following, then there are many blank lines after the line on which myString is printed. It works correctly if I use 1 as the second parameter:

        MultiCell(0, myFontSize, "123456", currentBorders, Alignment.LEFT, false);

    What is the problem?

    Read the article

  • Getting Types in Win32 Dll

    - by Usman
    Hello, I want to know the types and details in a plain Win32 DLL, just like we can get in the case of COM. In COM everything is embedded inside the IDL and results in a TLB; there we get everything, and MSFT exposes APIs by which we can extract the types. In the case of Win32 I strongly need the types defined in the DLL and all details of each type (e.g. what members it has and their types as well). Parsing the PE file and looking up the export table only gives the exported functions. I want all custom types (Win32 interfaces, classes and member details with types) defined in it. How? Regards, Usman

    Read the article

  • Should functional programming be taught before imperative programming?

    - by Zifre
    It seems to me that functional programming is a great thing. It eliminates state and makes it much easier to automatically make code run in parallel. Many programmers who were first taught imperative programming styles find it very difficult to learn functional programming, because it is so different. I began to wonder if programmers who were taught functional programming first would find it hard to begin imperative programming. It seems like it would not be as hard as the other way around, so I thought it would be a good thing if more programmers were taught functional programming first. So, my question is, should functional programming be taught in school before imperative, and if so, why is it not more common to start with it?

    Read the article

  • How do I upload an NSImage(NSData)to Twitpic with OAMutableURLRequest?

    - by timothy5216
    I'm using OAConsumer in my xAuth twitterEngine and i'm adding Twitpic OAuth Echo to it. But it won't POST the NSData. here is some of my code: //other file NSArray *reps = [[imageToUpload image] representations]; NSData *imageData = [NSBitmapImageRep representationOfImageRepsInArray:reps usingType:NSJPEGFileType properties:nil]; [twitter testUploadImageData:imageData withMessage:@"Hello WORLD!!" toURL:[NSURL URLWithString:uploadURL.stringValue]]; // - (void)testUploadImageData:(NSData *)data withMessage:(NSString *)message toURL:(NSURL *)url; { //url = @"http://api.twitpic.com/2/upload.xml" //message = @"Hello WORLD!!" NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSString *String = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"dataString: %@",String); OAMutableURLRequest *request = [[OAMutableURLRequest alloc] initWithURL:url consumer:self.consumer token:_accessToken realm:nil signatureProvider:nil]; // Setup POST body [request setHTTPMethod:@"POST"]; //NSString *stringBoundary = [NSString stringWithString:@"0xKhTmLbOuNdArY"]; //NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", stringBoundary]; // NSString *stringBoundarySeparator = [NSString stringWithFormat:@"\r\n--%@\r\n", stringBoundary]; /* NSMutableString *postString = [NSMutableString string]; [postString appendString:@"\r\n"]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"message\"\r\n\r\n%@", message]]; [postString appendString:stringBoundarySeparator]; [postString appendString:[NSString stringWithFormat:@"Content-Disposition: form-data; name=\"media\"; filename=\"%@\"\r\n", @"file.jpg"]]; [postString appendString:@"Content-Type: image/jpg\r\n"]; [postString appendString:@"Content-Transfer-Encoding: binary\r\n\r\n"]; // Setting up the POST request's multipart/form-data body NSMutableData *postBody = [NSMutableData data]; [postBody appendData:[postString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:data]; [request setHTTPBody:postBody]; */ [request setHTTPMethod:@"POST"]; NSString *thing = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding]; NSLog(@"%@",thing); [request setParameters:[NSArray arrayWithObjects: [OARequestParameter requestParameterWithName:@"oauth_token" value:_accessToken.key], [OARequestParameter requestParameterWithName:@"X-Auth-Service-Provider" value:@"https://api.twitter.com/1/account/verify_credentials.json"], [OARequestParameter requestParameterWithName:@"key" value:@"my-key-here :P"], [OARequestParameter requestParameterWithName:@"message" value:message], //iv'e changed this many times. I was just trying this to see if it works [OARequestParameter requestParameterWithName:@"media" value:thing], nil]]; OAAsynchronousDataFetcher *dataFetcher = [[OAAsynchronousDataFetcher alloc] init]; [dataFetcher initWithRequest:request delegate:self didFinishSelector:@selector(uploadDidUpload:withData:) didFailSelector:@selector(uploadDidFail:withData:)]; [dataFetcher start]; [dataFetcher release]; [request release]; [pool drain]; } I'm authenticated but it still won't POST the data :(

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk

    In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

    What are my Options?

    There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include:

    1. Create new virtual disk images for each new VM.
    2. Clone the existing disk images and point the new VM at the cloned images.
    3. Point the new VM at the existing snapshots.

    #1 is too much like hard work: install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short!

    #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need.

    #3 is the most space efficient way of doing things.  It does mean however that I can only run the new "cloned" images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all three images together.  So this is the approach we will follow.

    Snapshot, What Snapshot?

    As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the "Media Manager" from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the "Attached to:" comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file).  When we create the new machines we will point them at the snapshot filename we have just checked.

    Network or NotWork?

    Because we want the two new machines to communicate with each other when hosted in different physical machines we can't use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world.

    To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration.  The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don't get any clashes with other machines on your network.  Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images and as long as I don't have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that.

    Creating the New VMs

    Now that I had collected the data that I needed I went ahead and created the new VMs.  When asked for a "Boot Hard Disk" I used the "Choose a virtual hard disk file…" link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine.  After the initial creation of the virtual machine go into the network settings section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use fixed IP and the NAT adapter to use DHCP.  We are now ready to start the VMs and reconfigure Linux.

    Reconfiguring Linux

    Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings.

    Changing the Hostname

    I renamed both hosts by running the hostname command as root:

        hostname vboxfmw.oracle.com

    I also edited the /etc/sysconfig/network file and set the correct hostname in there:

        HOSTNAME=vboxfmw.oracle.com

    Changing the Network Settings

    I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went in to the System->Preferences->Network Connections screen and explicitly set the "Device MAC address" for the two adapters.  Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the Bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment.

    Updating the Hosts File

    Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to:

    - refer to the new machine name
    - add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address)
    - add a new line for the fixed IP address of the other virtual machine

        10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
        10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
        127.0.0.1       localhost.localdomain   localhost
        ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

    To make sure everything takes effect I restarted the server.

    Reconfiguring the Database on the DB Machine

    Because we changed the hostname the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname.  I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

        (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

    After changing the listener.ora I was able to start the listener using:

        lsnrctl start

    I also had to reconfigure the EM database control.  I first deconfigured it using the command:

        emca -deconfig dbcontrol db -repos drop

    This drops the repository and removes any existing registered dbcontrols.  I then re-configured it using the following command:

        emca -config dbcontrol db -repos create

    This creates the EM repository and then configures and starts dbcontrol.  Now my database machine is ready so I can close it down and take a snapshot.

    Disabling the Database on the FMW Machine

    I set up the database to start automatically by creating a service called "dbora".  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command:

        chkconfig --del dbora

    Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don't run it, it won't cost me anything.  I can now close the FMW machine down and take a snapshot.

    Creating a New Domain

    The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.

    Gotchas in Snapshotting

    VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration.

    The Sky's the Limit

    We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember what machine holds state and what are the consequences of taking a snapshot.

    Read the article

  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (Ladies included). I posted on a related site, but THIS is the place to be. I want to build a specialized website. I am an entrepreneur and am refining valuations now for venture capitalists: i.e., determining how much cash I will need. I need help in understanding what human resources I need (i.e., Software Programmers, Architects, Engineers, etc.)??? Trust me, I have read most--if not all--of the threads here on the subject, and I can tell you I am no closer to the answer than ever. Here's my technology problem: The website will include (2) main components: a search engine (web crawler)...and a very large database. The search engine will not be a competitor to Google, obviously; however, it "will" require bots to scour the web. The website will be, basically, a statistical database....where users should be able to pull up any statistic from "numerous" fields. Like any entrepreneur with a web-based vision, I'm "hoping" to get 100+ million registered users eventually. However, practically, we will start as small as feasible. As regards the technology (database architecture, servers, etc.), I do want quality, quality, quality. My priorities are speed, and the capability to be scalable...so that if I "did" get globally large, we could do it without having to re-engineer anything. In other words, I want the back-end and the "infrastructure" to be scalable and professional....with emphasis on quality. I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who's only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem and let a capable team create the final product, and I will stay totally hands off. So who do you guys suggest I hire to run this thing? A software engineer? I was thinking I would need a "database engineer," a "systems security engineer", and maybe 2 or 3 "programmers" for the search engine. Also a web designer...and maybe a part-time graphic designer...everyone working under a single software engineer. What do you guys think? Who should I hire?...I REALLY need help from some people in the industry (YOU guys) on this. Is this project do-able in 6 months? If so, how many people will I need? Who exactly needs to head up this thing?...A senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?

    Read the article

  • Why is -[UISwitch superlayer] being called?

    - by Jonathan Sterling
    I've got sort of a crazy thing going on where I have an NSProxy subclass standing in for a UISwitch. Messages sent to the proxy are forwarded to the switch. Please don't comment on whether or not this is a good design, because in context, it makes sense as an incredibly cool thing. The dealio is that when I try to add this object as an accessory view to a UITableViewCell, I get the following crash: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UISwitch superlayer]: unrecognized selector sent to instance 0x5889740' Yes, I could just set the proxy's target as the accessory view, but then I would have to keep track of the proxy so that I could release it at the right time. So, what I really want is to be able to have the proxy retained by the table cell, and released when it is removed from the view just like a normal accessory view. So, why is -[UISwitch superlayer] (a method which does not exist) being called, and how do I save the world?

    Read the article

  • Erb with Sinatra in ruby

    - by JP
    So I have a webserver I've built using sinatra, the meat of which goes like this:

        set :variable, "value"

        get '/' do
          erb :index
        end

    And, of course, the template in views/index.erb which looks something like this:

        <html>
        <!-- etc -->
        <ul>
          <% my_array.each do |thing| %>
            <%="Something: #{thing}, variable from sinatra: #{settings.variable}"%>
          <% end %>
        </ul>
        </html>

    If you try running code like this you'll notice that you can't access sinatra's settings variable from inside erb templates. Any ideas how I can achieve this while keeping its simplicity? Thanks in advance!

    Read the article

  • DataContractJsonSerializer produces list of hashes instead of hash

    - by Jacques
    I would expect a Dictionary object of the form:

        Dictionary<string,string> dict = new Dictionary<string,string>() { { "blah", "bob" }, { "blahagain", "bob" } };

    to serialize into JSON in the form of:

        { "blah": "bob", "blahagain": "bob" }

    NOT:

        [ { "key": "blah", "value": "bob" }, { "key": "blahagain", "value": "bob" } ]

    What is the reason for what appears to be a monstrosity of a generic attempt at serializing collections? The DataContractJsonSerializer uses the ISerializable interface to produce this thing. It seems to me as though somebody has taken the XML output from ISerializable and mangled this thing out of it. Is there a way to override the default serialization used by .NET here? Could I just derive from Dictionary and override the serialization methods? Posting to hear of any caveats or suggestions people might have.
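    One workaround sometimes suggested, assuming a flat string-to-string dictionary and that switching serializers is acceptable (so this is a sketch of an alternative, not a way to change DataContractJsonSerializer's own output), is to serialize the dictionary with JavaScriptSerializer, which writes string-keyed dictionaries as a plain JSON object:

        // Sketch using JavaScriptSerializer (System.Web.Extensions assembly).
        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        var dict = new Dictionary<string, string>
        {
            { "blah", "bob" },
            { "blahagain", "bob" }
        };

        string json = new JavaScriptSerializer().Serialize(dict);
        // json == {"blah":"bob","blahagain":"bob"} rather than a list of key/value pairs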

    Read the article

  • Python-daemon doesn't kill its kids

    - by Brian M. Hunt
    When using python-daemon, I'm creating subprocesses like so:

        import multiprocessing

        class Worker(multiprocessing.Process):
            def __init__(self, queue):
                self.queue = queue  # we wait for things from this in Worker.run()
                ...

        q = multiprocessing.Queue()

        with daemon.DaemonContext():
            for i in xrange(3):
                Worker(q)

            while True:
                # let the Workers do their thing
                q.put(_something_we_wait_for())

    When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids? My first thought is to use atexit to kill all the workers, like so:

        with daemon.DaemonContext():
            workers = list()
            for i in xrange(3):
                workers.append(Worker(q))

            @atexit.register
            def kill_the_children():
                for w in workers:
                    w.terminate()

            while True:
                # let the Workers do their thing
                q.put(_something_we_wait_for())

    However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done. Thank you.

    Read the article

  • Sprite movement

    - by Lemmons
    Hi everyone. I'm ripping my hair out over this one. For some odd reason I cannot find out / think of how to move a sprite in SFML and/or SDL. The tutorials I've looked at for both libraries state nothing about this, so I assume that it's more of a C++ thing than a library thing. So I was wondering: how do you move a sprite? (When I say move, I mean have the sprite "glide" across the window at a set speed.)

    Read the article

  • What does it take to prove this Contract.Requires?

    - by John Gietzen
    I have an application that runs through the rounds in a tournament, and I am getting a contract warning on this simplified code structure:

        public static void LoadState(IList<Object> stuff)
        {
            for (int i = 0; i < stuff.Count; i++)
            {
                // Contract.Assert(i < stuff.Count);
                // Contract.Assume(i < stuff.Count);
                Object thing = stuff[i];
                Console.WriteLine(thing.ToString());
            }
        }

    The warning is:

        contracts: requires unproven: index < @this.Count

    What am I doing wrong? How can I prove this on an IList<T>? Is this a bug in the static analyzer? How would I submit a bug report to Microsoft?

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. So how did this happen and what does it mean for Microsoft's efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a "Microsoft Is Dead" story, then don't read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading.

    What Led Microsoft To The $900 Million Write-Down

    Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should've been ready to post the specifications of both devices that night. Microsoft should've had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea what the interest was in each device. They could also have used this data to make a better estimate for the number of units to have available for the launch and beyond. They also lost out on taking advantage of the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard to anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached. Push for as many pre-orders as possible and build excitement for the launch.

    Actual Launch (Surface RT): By this time all excitement was gone from the initial announcement, except for the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device and now they are lowering it to what it should have been to start with. The $349 price is within the range I suggested it should be at before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted (this applies more to the Surface Pro). There should have also been a place where users could order any additional add-ins that they wanted to buy (covers, extra power supplies, etc.). Marketing was better and the dancing "Click In" commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and built-in kickstand. They could have also compared it to some of the other mid-range tablets if they had not overpriced it to begin with.

    Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications the best of breed and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking in features that should have been there from day one. This did not help to enhance a new user's experience any. I will admit that the applications that were data driven were first-class citizens, and that makes it even more perplexing why MS could knock it out of the park with those (Weather, Travel, Finance, Bing, etc.) and fail so miserably on the core applications users would use the most on a tablet.

    Desktop on Tablet: The desktop is just so out of place on the tablet. I understand it was needed for Office, but I think it would have been better to not have the desktop in Windows RT, and instead open up the Office applications in full screen mode, in a desktop shell (same goes for IE11). That way the user wouldn't realize they are leaving Metro and going to the desktop. The other option would have been to just not include Office on Windows RT devices. Instead they could have made awesome Windows Store Apps for Word, Excel, OneNote and PowerPoint. In addition, they could have made the stock Mail, People, and Calendar applications contain all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under "Change PC Settings" made Windows RT seem unfinished and rushed to market.

    What Can Be Done To Make Windows RT Based Tablets Better (At least in my opinion)

    - Either eliminate the desktop altogether from Windows RT or at least make the user experience better by hiding the fact the user is running Office/IE in the desktop. Personally I'd like them to totally get rid of it and just make awesome Windows Store Application versions of Word, Excel, PowerPoint & OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be Windows Store App versions of Office, but I still think it is a good idea.
    - Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VM on their microSD card or create symbolic links to get this to work properly. Most consumers would not be able to do this. Then users get frustrated when they run out of room on their main storage because nothing is automatically saved to their microSD card when saved to libraries. This is a major bug that needs to be fixed, otherwise Microsoft's selling point of having a microSD slot is worthless.
    - Allow users to uninstall and re-install any of the Office products that come with the Surface. That way people can free up storage space by uninstalling the Office applications they do not need. Everyone's needs are different, so make the options flexible. Don't take up storage space for applications the user will not use.
    - Make the core applications the "Cream of the Crop" Windows App Store applications. They should set the bar for all other Store applications.
    - Improve performance as much as possible; if it seems sluggish on a tablet, consumers will not buy it.
    - Price the next line of Surface products very aggressively to undercut not only the iPad but also low-end Android tablets (Nook, Kindle Fire, Nexus, etc.).
    - Give developers incentives to write quality applications for the devices. Don't reward developers for cranking out cookie-cutter, low-quality applications. I'd even suggest Microsoft consider implementing some new store certification guidelines to stop these types of applications from being published.
    - Allow users to easily move the recovery disk partition between their microSD card and main storage.

    My Predictions for the Surface RT and Windows RT

    I honestly think that even with all the missteps MS has made since the announcement about the Surface product line, they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fanboy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple's butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew, and that is why whenever anyone asks me about my Surface, I love showing it off and recommending it.

    The 6 keys to gaining market share in the tablet market are:

    1. Aggressive pricing by both Microsoft and their OEMs.
    2. Good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough).
    3. Marketing, marketing, marketing from both Microsoft and their OEMs (we need more ads showing why Windows based tablets are better than iPads and Android tablets).
    4. Getting Windows tablets into retail stores all over, and giving sales people an incentive to sell them. Consumers like to try electronics out before they buy them, and most will listen to what the sales person suggests. Microsoft needs sales people in retail stores directing people to buy Windows based tablets over iPads and Android tablets. I think the Microsoft Stores within Best Buy are a good start, but they also need to get prominent displays in Walmart, Target, etc.
    5. Release a smaller form factor Surface. Hopefully the 8"-10" next generation Surface is not a rumor.
    6. Make "Surface" the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way that iPad is for Apple.

    Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article
