Search Results

Search found 870 results on 35 pages for 'allocation'.

Page 32/35 | < Previous Page | 28 29 30 31 32 33 34 35  | Next Page >

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License This has been a busy week for Microsoft, and for me as well.  On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX.  That evening I flew from New York to Seattle.  On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made.  Readers of this blog know I‘m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago.  On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.  But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts.  While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with iPod touch in the portable music player market. If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not.  HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser.  Try it and see.  Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production.  Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident.  Xbox as an entertainment appliance is also strategic.  Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good. But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks. What’s the answer?  
I think it’s a bit like the 1980’s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it.  If the light turns red and you’re blocking the perpendicular traffic, that’s your fault in judgment.  You get fined and get points on your license and you don’t get to shrug it off as beyond your control.  Accountability is key.  The same goes for Microsoft.  If it decides to enter a market, it should see a reasonable path through success in that market. Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too.  When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.


  • Android: OutOfMemoryError: bitmap size exceeds VM budget with no reason I can see.

    - by Meymann
    Hi. I am having an OutOfMemory exception with a gallery over 600x800 pixels JPEG's. The environment I've been using Gallery with JPG images around 600x800 pixels. Since my content may be a bit more complex than just images, I have set each view to be a RelativeLayout that wraps ImageView with the JPG. In order to "speed up" the user experience I have a simple cache of 4 slots that prefetches (in a looper) about 1 image left and 1 image right to the displayed image and keeps them in a 4 slot HashMap. The platform I am using AVD of 256 RAM and 128 Heap Size, with a 600x800 screen. It also happens on an Entourage Edge target, except that with the device it's harder to debug. The problem I have been getting an exception: OutofMemoryError: bitmap size exceeds VM budget And it happens when fetching the fifth image. I have tried to change the size of my image cache, and it is still the same. The strange thing: There should not be a memory problem In order to make sure the heap limit is very far away from what I need, I have defined a dummy 8MB array in the beginning, and left it unreferenced so it's immediately dispatched. It is a member of the activity thread and is defined as following static { @SuppressWarnings("unused") byte dummy[] = new byte[ 8*1024*1024 ]; } The result is that the heap size is nearly 11MB and it's all free. Note I have added that trick after it began to crash. It makes OutOfMemory less frequent. Now, I am using DDMS. Just before the crash (does not change much after the crash), DDMS shows: ID Heap Size Allocated Free %Used #Objects 1 11.195 MB 2.428 MB 8.767 MB 21.69% 47,156 And in the detail table it shows: Type Count Total Size Smallest Largest Median Average free 1,536 8.739MB 16B 7.750MB 24B 5.825KB The largest block is 7.7MB. And yet the LogCat says: ERROR/dalvikvm-heap(1923): 925200-byte external allocation too large for this process. If you mind the relation of the median and the average, it is plausible to assume that most of the available blocks are very small. However, there is a block large enough for the bitmap, it's 7.7M. How come it is still not enough? Note: I recorded a heap trace. When looking at the amount of data allocated, it does not feel like more than 2M is allocated. It does match the free memory report by DDMS. Could it be that I experience some problem like heap-fragmentation? How do I solve/workaround the problem? Is the heap shared to all threads? Could it be that I interpret the DDMS readout in a wrong way, and there is really no 900K block to allocate? If so, can anybody please tell me where I can see that? Thanks a lot Meymann


  • 'NoneType' object has no attribute 'get' error using SQLAlchemy

    - by Az
    I've been trying to map an object to a database using SQLAlchemy but have run into a snag. Version info if handy: [OS: Mac OSX 10.5.8 | Python: 2.6.4 | SQLAlchemy: 0.5.8] The class I'm going to map: class Student(object): def __init__(self, name, id): self.id = id self.name = name self.preferences = collections.defaultdict(set) self.allocated_project = None self.allocated_rank = 0 def __repr__(self): return str(self) def __str__(self): return "%s %s" %(self.id, self.name) Background: Now, I've got a function that reads in the necessary information from a text database into these objects. The function more or less works and I can easily access the information from the objects. Before the SQLAlchemy code runs, the function will read in the necessary info and store it into the Class. There is a dictionary called students which stores this as such: students = {} students[id] = Student(<all the info from the various "reader" functions>) Afterwards, there is an "allocation" algorithm that will allocate projects to student. It does that well enough. The allocated_project remains as None if a student is unsuccessful in getting a project. SQLAlchemy bit: So after all this happens, I'd like to map my object to a database table. Using the documentation, I've used the following code to only map certain bits. I also begin to create a Session. from sqlalchemy import * from sqlalchemy.orm import * engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() students_table = Table('studs', metadata, Column('id', Integer, primary_key=True), Column('name', String) ) metadata.create_all(engine) mapper(Student, students_table) Session = sessionmaker(bind=engine) sesh = Session() Now after that, I was curious to see if I could print out all the students from my students dictionary. for student in students.itervalues(): print student What do I get but an error: Traceback (most recent call last): File "~/FYP_Tests/FYP_Tests.py", line 140, in <module> print student File "/~FYP_Tests/Parties.py", line 30, in __str__ return "%s %s" %(self.id, self.name) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/attributes.py", line 158, in __get__ return self.impl.get(instance_state(instance), instance_dict(instance)) AttributeError: 'NoneType' object has no attribute 'get' I'm at a loss as to how to resolve this issue, if it is an issue. If more information is required, please ask and I will provide it.


  • Server-side XForms form validation and integration into ASP.NET

    - by Nigel
    I have recently been investigating methods of creating web-based forms for an ASP.NET web application that can be edited and managed at runtime. For example an administrator might wish to add a new validation rule or a new set of fields. The holy grail would provide a means of specifying a form along with (potentially very complex) arbitrary validation rules, and allocation of data sources for each field. The specification would then be used to update the deployed form in the web application which would then validate submissions both on the client side and on the server side. My investigations led me to Xforms and a number of technologies that support it. One solution appears to be IBM Lotus Forms, but this requires a very large investment in terms of infrastructure, which makes it infeasible, although the forms designer may be useful as a stand-alone tool for creating the forms. I have also discounted browser plug-ins as the form must be publicly visible and cross-browser compliant. I have noticed that there are numerous javascript libraries that provide client side implementations given an Xforms schema. These would provide a partial solution but server side validation is still a requirement. Another option seems to involve the use of server side solutions such as the Java application Orbeon. Orbeon provides a tool for specifying the forms (although not as rich as Lotus Forms Designer), but the most interesting point is that it can translate an XForms schema into an XHTML form complete with validation. The fact that it is written in Java is not a big problem if it is possible to integrate with the existing ASP.NET application. So my question is whether anyone has done this before. It sounds like a problem that should have been solved but is inherently very complex. It seems possible to use an off-the-shelf tool to design the form and export it to an Xforms schema and xhtml form, and it seems possible to take that xforms schema and form and publish it using a client side library. What seems to be difficult is providing a means of validating the form submission on the server side and integrating the process nicely with .NET (although it seems the .NET community doesn't involve themselves with XForms; please correct me if I'm wrong on this count). I would be more than happy if a product provided something simple like a web service that could validate a submission against a schema. Maybe Orbeon does this but I'd be grateful if somebody in the know could point me in the right direction before I research it further. Many thanks.


  • Optimizing multiple dispatch notification algorithm in C#?

    - by Robert Fraser
    Sorry about the title, I couldn't think of a better way to describe the problem. Basically, I'm trying to implement a collision system in a game. I want to be able to register a "collision handler" that handles any collision of two objects (given in either order) that can be cast to particular types. So if Player : Ship : Entity and Laser : Particle : Entity, and handlers for (Ship, Particle) and (Laser, Entity) are registered than for a collision of (Laser, Player), both handlers should be notified, with the arguments in the correct order, and a collision of (Laser, Laser) should notify only the second handler. A code snippet says a thousand words, so here's what I'm doing right now (naieve method): public IObservable<Collision<T1, T2>> onCollisionsOf<T1, T2>() where T1 : Entity where T2 : Entity { Type t1 = typeof(T1); Type t2 = typeof(T2); Subject<Collision<T1, T2>> obs = new Subject<Collision<T1, T2>>(); _onCollisionInternal += delegate(Entity obj1, Entity obj2) { if (t1.IsAssignableFrom(obj1.GetType()) && t2.IsAssignableFrom(obj2.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj1, (T2) obj2)); else if (t1.IsAssignableFrom(obj2.GetType()) && t2.IsAssignableFrom(obj1.GetType())) obs.OnNext(new Collision<T1, T2>((T1) obj2, (T2) obj1)); }; return obs; } However, this method is quite slow (measurable; I lost ~2 FPS after implementing this), so I'm looking for a way to shave a couple cycles/allocation off this. I thought about (as in, spent an hour implementing then slammed my head against a wall for being such an idiot) a method that put the types in an order based on their hash code, then put them into a dictionary, with each entry being a linked list of handlers for pairs of that type with a boolean indication whether the handler wanted the order of arguments reversed. Unfortunately, this doesn't work for derived types, since if a derived type is passed in, it won't notify a subscriber for the base type. Can anyone think of a way better than checking every type pair (twice) to see if it matches? Thanks, Robert


  • Challenge: Neater way of currying or partially applying C#4's string.Join

    - by Damian Powell
    Background I recently read that .NET 4's System.String class has a new overload of the Join method. This new overload takes a separator, and an IEnumerable<T> which allows arbitrary collections to be joined into a single string without the need to convert to an intermediate string array. Cool! That means I can now do this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0); var list = string.Join(",",evenNums); ...instead of this: var evenNums = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .Select(i => i.ToString()) .ToArray(); var list = string.Join(",", evenNums); ...thus saving on a conversion of every item to a string, and then the allocation of an array. The Problem However, being a fan of the functional style of programming in general, and method chaining in C# in particular, I would prefer to be able to write something like this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .string.Join(","); This is not legal C# though. The closest I've managed to get is this: var list = Enumerable.Range(1, 100) .Where(i => i%2 == 0) .ApplyTo( Functional.Curry<string, IEnumerable<object>, string> (string.Join)(",") ); ...using the following extension methods: public static class Functional { public static TRslt ApplyTo<TArg, TRslt>(this TArg arg, Func<TArg, TRslt> func) { return func(arg); } public static Func<T1, Func<T2, TResult>> Curry<T1, T2, TResult>(this Func<T1, T2, TResult> func) { Func<Func<T1, T2, TResult>, Func<T1, Func<T2, TResult>>> curried = f => x => y => f(x, y); return curried(func); } } This is quite verbose, requires explicit definition of the parameters and return type of the string.Join overload I want to use, and relies upon C#4's variance features because we are defining one of the arguments as IEnumerable rather than IEnumerable. The Challenge Can you find a neater way of achieving this using the method-chaining style of programming?


  • Problem in retrieving the ini file through web page

    - by MalarN
    Hi All, I am using an .ini file to store some values and retrieve values from it using the iniparser. When I give (hardcode) the query and retrive the value through the command line, I am able to retrive the ini file and do some operation. But when I pass the query through http, then I am getting an error (file not found), i.e., the ini file couldn't be loaded. Command line : int main(void) { printf("Content-type: text/html; charset=utf-8\n\n"); char* data = "/cgi-bin/set.cgi?pname=x&value=700&url=http://IP/home.html"; //perform some operation } Through http: .html function SetValue(id) { var val; var URL = window.location.href; if(id =="set") { document.location = "/cgi-bin/set.cgi?pname="+rwparams+"&value="+val+"&url="+URL; } } .c int * Value(char* pname) { dictionary * ini ; char *key1 = NULL; char *key2 =NULL; int i =0; int val; ini = iniparser_load("file.ini"); if(ini != NULL) { //key for fetching the value key1 = (char*)malloc(sizeof(char)*50); if(key1 != NULL) { strcpy(key1,"ValueList:"); key2 = (char*)malloc(sizeof(char)*50); if(key2 != NULL) { strcpy(key2,pname); strcat(key1,key2); val = iniparser_getint(ini, key1, -1); if(-1 == val || 0 > val) { return 0; } } else { //error free(key1); return; } } else { printf("ERROR : Memory Allocation Failure "); return; } } else { printf("ERROR : .ini File Missing"); return; } iniparser_freedict(ini); free(key1); free(key2); return (int *)val; } void get_Value(char* pname,char* value) { int result =0; result = Value(pname); printf("Result : %d",result); } int main(void) { printf("Content-type: text/html; charset=utf-8\n\n"); char* data = getenv("QUERY_STRING"); //char* data = "/cgi-bin/set.cgi?pname=x&value=700&url=http://10.50.25.40/home.html"; //Parse to get the values seperately as parameter name, parameter value, url //Calling get_Value method to set the value get_Value(final_para,final_val); } * file.ini * [ValueList] x = 100; y = 70; When the request is sent through html page, I am always getting .ini file missing. If directly the request is sent from C file them it works fine. How to resolve this?
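
    A point worth checking for the behaviour described above: when the same binary runs under the web server, its current working directory is generally not the cgi-bin directory, so the relative name "file.ini" stops resolving (and the server's user also needs read permission on the file). Below is a rough sketch, separate from the poster's code, that logs the CGI's working directory and loads the file through an absolute path; the /var/www/data path is only a placeholder, and the iniparser calls used are the library's documented ones.

        #include <stdio.h>
        #include <unistd.h>      /* getcwd */
        #include <iniparser.h>   /* iniparser_load / iniparser_getint / iniparser_freedict */

        int main(void)
        {
            printf("Content-type: text/plain; charset=utf-8\n\n");

            char cwd[512];
            if (getcwd(cwd, sizeof cwd) != NULL)
                printf("CGI working directory: %s\n", cwd);

            /* Absolute path, so the result does not depend on the CWD. */
            dictionary *ini = iniparser_load("/var/www/data/file.ini");
            if (ini == NULL) {
                printf("ERROR : could not load the .ini file\n");
                return 0;
            }

            int x = iniparser_getint(ini, "ValueList:x", -1);
            printf("x = %d\n", x);

            iniparser_freedict(ini);
            return 0;
        }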


  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to a NSMutableDictionary. When I try, I get a EXC_BAD_ACCESS. Here is the code: //in .h file typedef struct { NSString *valueOne; NSString *valueTwo; } myStruct; myStruct aStruct; //in .m file - (void)viewDidLoad { [super viewDidLoad]; aStruct.valueOne = @"firstValue"; } //at some later time [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; //dies here with EXC_BAD_ACCESS This is the output in debugger console: (gdb) p aStruct.valueOne $1 = (NSString *) 0xf41850 Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it? ------------- EDIT ------------- This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do: po aStruct.oneValue Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 0x9895cedb in objc_msgSend () The program being debugged was signaled while in a function called from GDB. GDB has restored the context to what it was before the call. To change this behavior use "set unwindonsignal off" Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned. This occurs just before the EXC_BAD_ACCESS: NSDateFormatter *formatter = [[NSDateFormatter alloc] init]; [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"]; NSString *date = [formatter stringFromDate:[NSDate date]]; [formatter release]; aStruct.valueOne = date; So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing NSString *date = [[formatter stringFromDate:[NSDate date]] retain]; Which does work but then I'm left with a memory leak.


  • Placing a window near the system tray

    - by user227990
    I am writing a program that needs to set a window just above/below the traybar for gtk. I have tried using the 2 approaches that failed. One was using the gtk_status_icon_position_menu function and placing the window in the point where the user clicks (in the tray bar). The problem is that these solutions work in gnome(Linux) but not in Windows. In Linux they work because the window manager doesn't seem to allow placement of windows in the tray panel, honoring the closest possible. In Windows this doesn't happen and the window can go "out" of the screen which understandably is not desired. With this said i went out for a work around. My idea was to set the window in the location of mouse click and get the x and y coordinates of a normal window placement and with it's size check if it would be within the screen boundaries. If it was not make the correction. I have came up with the functions needed but for some reason the gdk_drawable_get_size(window-window ,&WindowWidth, &WindowHeight) and other similar functions only give the correct size value after the second run of the signal function. The result of the first run is just 1 to both size and width. (I have read the issue of X11 not giving correct results, but i think this is not it) event_button = (GdkEventButton *) event; if (event_button->button == 1) { if (active == 0) { gboolean dummy; gint WindowHeight, WindowWidth, WindowPosition[2]; GdkScreen *screen; gint ScreenHeight, ScreenWidth; dummy = FALSE; gtk_widget_show_all(window); gtk_window_present(GTK_WINDOW(window)); gtk_status_icon_position_menu(menu, &pos[X], &pos[Y], &dummy, statusicon); gtk_window_move(GTK_WINDOW(window), pos[X],pos[Y]); gdk_drawable_get_size(window->window ,&WindowWidth, &WindowHeight); screen = gtk_status_icon_get_screen(statusicon); ScreenWidth = gdk_screen_get_width(screen); ScreenHeight = gdk_screen_get_height(screen); g_print("Screen: %d, %d\nGeometry: %d, %d\n",ScreenWidth, ScreenHeight, WindowWidth, window->allocation.height); gtk_entry_set_text(GTK_ENTRY(entry),""); active = 1; return TRUE; } How can i do what i want in a portable way?
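
    A hedged sketch of the clamping step described above: the arithmetic itself is portable between the Linux and Windows GTK backends, and the commented usage assumes (as a guess about the 1x1 readings) that the first size query simply runs before the window's initial size-allocate, so pending events are flushed first.

        #include <gtk/gtk.h>

        /* Keep a win_w x win_h window fully inside a scr_w x scr_h screen. */
        static void clamp_to_screen(gint *x, gint *y,
                                    gint win_w, gint win_h,
                                    gint scr_w, gint scr_h)
        {
            if (*x + win_w > scr_w) *x = scr_w - win_w;
            if (*y + win_h > scr_h) *y = scr_h - win_h;
            if (*x < 0) *x = 0;
            if (*y < 0) *y = 0;
        }

        /* Possible usage inside the button handler, after gtk_widget_show_all():
         *
         *     while (gtk_events_pending())   // let the first size-allocate run,
         *         gtk_main_iteration();      // so the reported size is real
         *     gdk_drawable_get_size(window->window, &WindowWidth, &WindowHeight);
         *     clamp_to_screen(&pos[X], &pos[Y], WindowWidth, WindowHeight,
         *                     ScreenWidth, ScreenHeight);
         *     gtk_window_move(GTK_WINDOW(window), pos[X], pos[Y]);
         */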


  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite but when the identical code is used by SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background. I my domain various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model an example of this would be: /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>. </summary> public class StaffMemberResource : ResourceBase { public virtual StaffMember StaffMember { get; private set; } public StaffMemberResource(StaffMember staffMember) { Check.RequireNotNull<StaffMember>(staffMember); base.BusinessId = staffMember.Number.ToString(); base.Name = staffMember.Name.ToString(); base.OrganizationName = staffMember.Department.Name; StaffMember = staffMember; } [UsedImplicitly] protected StaffMemberResource() { } } And in the db tables, there is a table per class inheritance where the ResourceBase has a discriminator and the id of the actual resource (ie, StaffMember) StaffMember - 1 ---- M- ResourceBase - 1 ----- M - Allocation The Code public override StaffMemberResource BuildResource(IActivityService activityService) { var sessionFactory = _GetSessionFactory(); var session = sessionFactory.GetCurrentSession(); StaffMemberResource result; using (var tx = session.BeginTransaction()) { var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number); var staff = session.CreateCriteria<StaffMember>() .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId))) .UniqueResult<StaffMember>(); if (staff == null) { ... build up a staff member result = new StaffMemberResource(staff); } else { ////////// var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember); result = session.CreateCriteria<StaffMemberResource>() .Add(Restrictions.Eq(property, staff)) .UniqueResult<StaffMemberResource>(); } /////////// tx.Commit(); } return result; } It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee numer is translated into a ResourceBase.BusinessId, Name is flattened out into a ResourceBase.Name, etc. Does anyone know why this might be? Cheers, Berryl


  • Cpp some basic problems

    - by DevAno1
    Hello. My task was as follows : Create class Person with char*name and int age. Implement contructor using dynamic allocation of memory for variables, destructor, function init and friend function show. Then transform this class to header and cpp file and implement in other program. Ok so I've almost finished my Person class, but I get error after destructor. First question is how to write this properly ? #include <iostream> using namespace std; class Person { char* name; int age; public: int * take_age(); Person(){ int size=0; cout << "Give length of char*" << endl; cin >> size; name = new char[size]; age = 0; } ~Person(){ cout << "Destroying resources" << endl; delete *[] name; delete * take_age(); } friend void(Person &p); int * Person::take_age(){ return age; } void init(char* n, int a) { name = n; age = a; } void show(Person &p){ cout << "Name: " << p.name << "," << "age: " << p.age << endl; } }; int main(void) { Person *p = new Person; p->init("Mary", 25); p.show(); system("PAUSE"); return 0; } And now with header/implementation part : - do I need to introduce constructor in header/implementation files ? If yes - how? - my show() function is a friendly function. Should I take it into account somehow ? I already failed to return this task on my exam, but still I'd like to know how to implement it.
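
    For reference, a few things in the snippet above cannot compile as posted: "delete *[] name" is not valid delete syntax, "friend void(Person &p)" declares nothing, and take_age() returns the int member through an int* return type. One possible corrected sketch of what the assignment seems to ask for is below; it keeps the posted names, and everything else (signatures, the use of const char*) is an assumption.

        #include <iostream>
        #include <cstring>
        using namespace std;

        class Person {
            char *name;   // dynamically allocated copy of the name
            int age;
        public:
            Person(const char *n, int a) {      // constructor allocates the buffer
                name = new char[strlen(n) + 1];
                strcpy(name, n);
                age = a;
            }
            ~Person() {                         // destructor releases it
                cout << "Destroying resources" << endl;
                delete[] name;
            }
            void init(const char *n, int a) {   // re-initialise an existing object
                delete[] name;
                name = new char[strlen(n) + 1];
                strcpy(name, n);
                age = a;
            }
            friend void show(const Person &p);  // friend function, not a member
        };

        void show(const Person &p) {
            cout << "Name: " << p.name << ", age: " << p.age << endl;
        }

        int main() {
            Person *p = new Person("Mary", 25);
            show(*p);
            delete p;                           // runs the destructor
            return 0;
        }

    For the header/implementation split, the class definition (constructor declaration included, like any other member) would go in person.h, with the member bodies and show() in person.cpp.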


  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up as so: engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() students_table = Table('studs', metadata, Column('sid', Integer, primary_key=True), Column('name', String), Column('preferences', Integer), Column('allocated_rank', Integer), Column('allocated_project', Integer) ) metadata.create_all(engine) mapper(Student, students_table) Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want provided I avoid the error cases below. The class it is mapped from is: class Student(object): def __init__(self, sid, name): self.sid = sid self.name = name self.preferences = collections.defaultdict(set) self.allocated_project = None self.allocated_rank = 0 def __repr__(self): return str(self) def __str__(self): return "%s %s" %(self.sid, self.name) Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this: for student in students.itervalues(): session.add(student) session.commit() It throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column: sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type. u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)' [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)] If I go back into my code I find that, when I'm copying the preferences from the given text files, it actually refers to the Project class which is mapped to a dictionary, using the unique project id's (pid) as keys. Thus, as I iterate through each student via their rank and to the preferences set, it adds not a project id, but the reference to the project id from the projects dictionary. students[sid].preferences[int(rank)].add(projects[int(pid)]) Now this is very useful to me since I can find out all I want to about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error has the object print information passed as: return "%s %s (Supervisor id: %s)" %(self.proj_id, self.proj_name, self.proj_sup) My questions are: I'm trying to store an object in a database field aren't I? Would the correct way then, be copying the project information (project id, name, etc) into its own table, referenced by the unique project id? That way I can just have the project id field for one of the student tables just be an integer id and when I need more information, just join the tables? So and so forth for other tables? If the above makes sense, then how does one maintain the relationship with a column of information in one table which is a key index on another table? Does this boil down into a database design problem? Are there any other elegant ways of accomplishing this? Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here sadly) to understand what could be going wrong.


  • Cleaning up a dynamic array of Objects in C++

    - by Dr. Monkey
    I'm a bit confused about handling an array of objects in C++, as I can't seem to find information about how they are passed around (reference or value) and how they are stored in an array. I would expect an array of objects to be an array of pointers to that object type, but I haven't found this written anywhere. Would they be pointers, or would the objects themselves be laid out in memory in an array? In the example below, a custom class myClass holds a string (would this make it of variable size, or does the string object hold a pointer to a string and therefore take up a consistent amount of space. I try to create a dynamic array of myClass objects within a myContainer. In the myContainer.addObject() method I attempt to make a bigger array, copy all the objects into it along with a new object, then delete the old one. I'm not at all confident that I'm cleaning up my memory properly with my destructors - what improvements could I make in this area? class myClass { private string myName; public unsigned short myAmount; myClass(string name, unsigned short amount) { myName = name; myAmount = amount; } //Do I need a destructor here? I don't think so because I don't do any // dynamic memory allocation within this class } class myContainer { int numObjects; myClass * myObjects; myContainer() { numObjects = 0; } ~myContainer() { //Is this sufficient? //Or do I need to iterate through myObjects and delete each // individually? delete [] myObjects; } void addObject(string name, unsigned short amount) { myClass newObject = new myClass(name, amount); myClass * tempObjects; tempObjects = new myClass[numObjects+1]; for (int i=0; i<numObjects; i++) tempObjects[i] = myObjects[i]); tempObjects[numObjects] = newObject; numObjects++; delete newObject; //Will this delete all my objects? I think it won't. //I'm just trying to delete the old array, and have the new array hold // all the objects plus the new object. delete [] myObjects; myObjects = tempObjects; } }
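
    Two points the question turns on: elements of a myClass[] array are stored by value, not as pointers, so a single delete[] myObjects runs every element's destructor (no per-element delete), and myClass itself needs no destructor because std::string manages its own memory. Below is a sketch of the same container rebuilt around std::vector, which drops the manual grow/copy/delete bookkeeping; the class names mirror the question and the rest is an assumption.

        #include <string>
        #include <vector>
        #include <iostream>

        class myClass {
            std::string myName;                 // std::string owns its own buffer
        public:
            unsigned short myAmount;
            myClass(const std::string &name, unsigned short amount)
                : myName(name), myAmount(amount) {}
            // no destructor needed: nothing here is allocated manually
        };

        class myContainer {
            std::vector<myClass> myObjects;     // elements stored by value
        public:
            void addObject(const std::string &name, unsigned short amount) {
                myObjects.push_back(myClass(name, amount));   // vector grows itself
            }
            std::size_t size() const { return myObjects.size(); }
            // no destructor needed: the vector destroys its elements automatically
        };

        int main() {
            myContainer c;
            c.addObject("apples", 3);
            c.addObject("pears", 7);
            std::cout << c.size() << " objects stored\n";
            return 0;
        }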


  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.
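
    As a point of comparison for the copy-versus-rebuild trade-off mentioned at the end, the rebuild alternative is only a few lines: keep your own list of descriptors and reconstruct the set with FD_ZERO/FD_SET before each call. A minimal sketch (the caller-maintained descriptor list is assumed):

        #include <sys/select.h>
        #include <cstddef>
        #include <vector>

        // Rebuild a read set from a caller-maintained list of descriptors.
        // Returns the value to pass as the first argument of select().
        static int rebuild_read_set(const std::vector<int> &fds, fd_set *set)
        {
            FD_ZERO(set);
            int max_fd = -1;
            for (std::size_t i = 0; i < fds.size(); ++i) {
                FD_SET(fds[i], set);
                if (fds[i] > max_fd)
                    max_fd = fds[i];
            }
            return max_fd + 1;      // select() wants highest fd + 1
        }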


  • Implications of trying to double free memory space in C

    - by SidNoob
    Here' my piece of code: #include <stdio.h> #include<stdlib.h> struct student{ char *name; }; int main() { struct student s; s.name = malloc(sizeof(char *)); // I hope this is the right way... printf("Name: "); scanf("%[^\n]", s.name); printf("You Entered: \n\n"); printf("%s\n", s.name); free(s.name); // This will cause my code to break } All I know is that dynamic allocation on the 'heap' needs to be freed. My question is, when I run the program, sometimes the code runs successfully. i.e. ./struct Name: Thisis Myname You Entered: Thisis Myname I tried reading this I've concluded that I'm trying to double-free a piece of memory i.e. I'm trying to free a piece of memory that is already free? (hope I'm correct here. If Yes, what could be the Security Implications of a double-free?) While it fails sometimes as its supposed to: ./struct Name: CrazyFishMotorhead Rider You Entered: CrazyFishMotorhead Rider *** glibc detected *** ./struct: free(): invalid next size (fast): 0x08adb008 *** ======= Backtrace: ========= /lib/tls/i686/cmov/libc.so.6(+0x6b161)[0xb7612161] /lib/tls/i686/cmov/libc.so.6(+0x6c9b8)[0xb76139b8] /lib/tls/i686/cmov/libc.so.6(cfree+0x6d)[0xb7616a9d] ./struct[0x8048533] /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb75bdbd6] ./struct[0x8048441] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 08:01 288098 /root/struct 08049000-0804a000 r--p 00000000 08:01 288098 /root/struct 0804a000-0804b000 rw-p 00001000 08:01 288098 /root/struct 08adb000-08afc000 rw-p 00000000 00:00 0 [heap] b7400000-b7421000 rw-p 00000000 00:00 0 b7421000-b7500000 ---p 00000000 00:00 0 b7575000-b7592000 r-xp 00000000 08:01 788956 /lib/libgcc_s.so.1 b7592000-b7593000 r--p 0001c000 08:01 788956 /lib/libgcc_s.so.1 b7593000-b7594000 rw-p 0001d000 08:01 788956 /lib/libgcc_s.so.1 b75a6000-b75a7000 rw-p 00000000 00:00 0 b75a7000-b76fa000 r-xp 00000000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fa000-b76fc000 r--p 00153000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fc000-b76fd000 rw-p 00155000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fd000-b7700000 rw-p 00000000 00:00 0 b7710000-b7714000 rw-p 00000000 00:00 0 b7714000-b7715000 r-xp 00000000 00:00 0 [vdso] b7715000-b7730000 r-xp 00000000 08:01 788898 /lib/ld-2.11.1.so b7730000-b7731000 r--p 0001a000 08:01 788898 /lib/ld-2.11.1.so b7731000-b7732000 rw-p 0001b000 08:01 788898 /lib/ld-2.11.1.so bffd5000-bfff6000 rw-p 00000000 00:00 0 [stack] Aborted So why is it that my code does work sometimes? i.e. the compiler is not able to detect at times that I'm trying to free an already freed memory. Has it got to do something with my stack/heap size?
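
    Worth noting about the symptom: glibc's "invalid next size (fast)" abort usually indicates heap corruption rather than a genuine double free. malloc(sizeof(char *)) reserves only 4 or 8 bytes, and the unbounded %[^\n] conversion writes whatever the user types past the end of that block, trashing the allocator's bookkeeping, so free() only blows up when the input happens to be long enough. A sketch with the buffer sized explicitly and the read bounded (the 100-byte limit is an arbitrary assumption):

        #include <stdio.h>
        #include <stdlib.h>

        struct student {
            char *name;
        };

        int main(void)
        {
            struct student s;

            s.name = (char *)malloc(100);           /* room for 99 chars + '\0' */
            if (s.name == NULL)
                return 1;

            printf("Name: ");
            if (scanf("%99[^\n]", s.name) != 1)     /* width limit matches the buffer */
                s.name[0] = '\0';

            printf("You Entered: \n\n");
            printf("%s\n", s.name);

            free(s.name);                           /* freed exactly once: fine */
            return 0;
        }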


  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.


  • C - What is the proper format to allow a function to show an error was encountered?

    - by BrainSteel
    I have a question about what a function should do if the arguments to said function don't line up quite right, through no fault of the function call. Since that sentence doesn't make much sense, I'll offer my current issue. To keep it simple, here is the most relevant and basic function I have. float getYValueAt(float x, PHYS_Line line, unsigned short* error) *error = 0; if(x < line.start.x || x > line.end.x){ *error = 1; return -1; } if(line.slope.value != 0){ //line's equation: y - line.start.y = line.slope.value(x - line.start.x) return line.slope.value * (x - line.start.x) + line.start.y; } else if(line.slope.denom == 0){ if(line.start.x == x) return line.start.y; else{ *error = 1; return -1; } } else if(line.slope.num == 0){ return line.start.y; } } The function attempts to find the point on a line, given a certain x value. However, under some circumstances, this may not be possible. For example, on the line x = 3, if 5 is passed as a value, we would have a problem. Another problem arises if the chosen x value is not within the interval the line is on. For this, I included the error pointer. Given this format, a function call could work as follows: void foo(PHYS_Line some_line){ unsigned short error = 0; float y = getYValueAt(5, some_line, &error); if(error) fooey(); else do_something_with_y(y); } My question pertains to the error. Note that the value returned is allowed to be negative. Returning -1 does not ensure that an error has occurred. I know that it is sometimes preferred to use the following method to track an error: float* getYValueAt(float x, PHYS_Line line); and then return NULL if an error occurs, but I believe this requires dynamic memory allocation, which seems even less sightly than the solution I was using. So, what is standard practice for an error occurring?
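
    One widely used convention, sketched below, keeps the return value purely as a status code and hands the computed y back through an out parameter, so no float value has to double as an error marker. The PHYS_Line fields here are stand-ins and only an assumption about the original struct.

        #include <stdio.h>

        typedef struct {                    /* stand-in for the real PHYS_Line */
            struct { float x, y; } start, end;
            struct { float value, num, denom; } slope;
        } PHYS_Line;

        /* Returns 0 on success, nonzero on failure; *out_y is written only on success. */
        static int getYValueAt(float x, PHYS_Line line, float *out_y)
        {
            if (x < line.start.x || x > line.end.x)
                return 1;                           /* x outside the segment */

            if (line.slope.denom == 0.0f) {         /* vertical line */
                if (line.start.x != x)
                    return 1;
                *out_y = line.start.y;
                return 0;
            }

            /* covers both the sloped and the horizontal (num == 0) cases */
            *out_y = line.slope.value * (x - line.start.x) + line.start.y;
            return 0;
        }

        int main(void)
        {
            PHYS_Line line = { {0.0f, 0.0f}, {10.0f, 5.0f}, {0.5f, 1.0f, 2.0f} };
            float y;

            if (getYValueAt(5.0f, line, &y) == 0)
                printf("y = %f\n", y);
            else
                printf("no y value for that x\n");
            return 0;
        }

    errno-style globals and negative sentinel returns are the other common patterns, but the out-parameter form composes best when every float is a legal result.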


  • Yet another Memory Leak Issue (memory is still gone when program terminates) - C program on SLES

    - by user1426181
    I run my C program on Suse Linux Enterprise that compresses several thousand large files (between 10MB and 100MB in size), and the program gets slower and slower as the program runs (it's running multi-threaded with 32 threads on a Intel Sandy Bridge board). When the program completes, and it's run again, it's still very slow. When I watch the program running, I see that the memory is being depleted while the program runs, which you would think is just a classic memory leak problem. But, with a normal malloc()/free() mismatch, I would expect all the memory to return when the program terminates. But, most of the memory doesn't get reclaimed when the program completes. The free or top command shows Mem: 63996M total, 63724M used, 272M free when the program is slowed down to a halt, but, after the termination, the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up. The top program only shows that the program, while running, is using at most 4% or so of the memory. I thought that it might be a memory fragmentation problem, but, I built a small test program that simulates all the memory allocation activity in the program (many randomized aspects were built in - size/quantity), and it always returns all the memory upon completion. So, I don't think that's it. Questions: Can there be a malloc()/free() mismatch that will lose memory permanently, i.e. even after the process completes? What other things in a C program (not C++) can cause permanent memory loss, i.e. after the program completes, and even the terminal window closes? Only a reboot brings the memory back. I've read other posts about files not being closed causing problems, but, I don't think I have that problem. Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program. If the program only shows a 4% memory usage, will something like valgrind find this problem?


  • Better way to write an object generator for an RAII template class?

    - by Dan
    I would like to write an object generator for a templated RAII class -- basically a function template to construct an object using type deduction of parameters so the types don't have to be specified explicitly. The problem I foresee is that the helper function that takes care of type deduction for me is going to return the object by value, which will result in a premature call to the RAII destructor when the copy is made. Perhaps C++0x move semantics could help but that's not an option for me. Anyone seen this problem before and have a good solution? This is what I have: template<typename T, typename U, typename V> class FooAdder { private: typedef OtherThing<T, U, V> Thing; Thing &thing_; int a_; // many other members public: FooAdder(Thing &thing, int a); ~FooAdder(); void foo(T t, U u); void bar(V v); }; The gist is that OtherThing has a horrible interface, and FooAdder is supposed to make it easier to use. The intended use is roughly like this: FooAdder(myThing, 2) .foo(3, 4) .foo(5, 6) .bar(7) .foo(8, 9); The FooAdder constructor initializes some internal data structures. The foo and bar methods populate those data structures. The ~FooAdder dtor wraps things up and calls a method on thing_, taking care of all the nastiness. That would work fine if FooAdder wasn't a template. But since it is, I would need to put the types in, more like this: FooAdder<Abc, Def, Ghi>(myThing, 2) ... That's annoying, because the types can be inferred based on myThing. So I would prefer to create a templated object generator, similar to std::make_pair, that will do the type deduction for me. Something like this: template<typename T, typename U, typename V> FooAdder<T, U, V> AddFoo(Thing &thing, int a) { return FooAdder<T, U, V>(thing, a); } That seems problematic: because it returns by value, the stack temporary object will be destructed, which will cause the RAII dtor to run prematurely. One thought I had was to give FooAdder a copy ctor with move semantics, kinda like std::auto_ptr. But I would like to do this without dynamic memory allocation, so I thought the copy ctor could set a flag within FooAdder indicating the dtor shouldn't do the wrap-up. Like this: FooAdder(FooAdder &rhs) // Note: rhs is not const : thing_(rhs.thing_) , a_(rhs.a_) , // etc... lots of other members, annoying. , moved(false) { rhs.moved = true; } ~FooAdder() { if (!moved) { // do whatever it would have done } } Seems clunky. Anyone got a better way?
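
    A C++03-compatible variation on the flag idea described above, sketched with the member list trimmed to essentials: make the flag mutable and take the source by const reference, so the object generator's return-by-value compiles whether or not the compiler elides the copy, and the wrap-up still runs exactly once. OtherThing is reduced here to a stand-in with a single commit() call, and the template is collapsed to one parameter; both are assumptions for brevity.

        #include <iostream>

        struct Thing { void commit() { std::cout << "wrap-up ran\n"; } };

        template<typename T>
        class FooAdder {
            Thing &thing_;
            T a_;
            mutable bool owns_;                  // true for exactly one live copy
        public:
            FooAdder(Thing &thing, T a) : thing_(thing), a_(a), owns_(true) {}

            // "Move-like" copy: the source hands over responsibility for wrap-up.
            FooAdder(const FooAdder &rhs)
                : thing_(rhs.thing_), a_(rhs.a_), owns_(true) { rhs.owns_ = false; }

            ~FooAdder() { if (owns_) thing_.commit(); }   // runs once, in the last owner

            FooAdder &foo(T t) { a_ += t; return *this; } // enables chaining
        };

        // Object generator: deduces T from the arguments, like std::make_pair.
        template<typename T>
        FooAdder<T> AddFoo(Thing &thing, T a) { return FooAdder<T>(thing, a); }

        int main() {
            Thing myThing;
            AddFoo(myThing, 2).foo(3).foo(5);    // wrap-up fires at end of the statement
            return 0;
        }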


  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this: LPVOID ptr = (LPVOID)0x12345678; free( ptr ); This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory? It seems to me that the the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains for allocation. Otherwise, how could it know if enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine if a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system prose a significant threat to system security? Situation Report Although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation: I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required changes to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing back the same pointer from the application to the driver. In this scenario the second API has no way to determine if the application has actually passed back a pointer to valid memory.
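
    Short of undocumented tricks there is no reliable way to ask whether an arbitrary pointer came from the CRT heap, so a common workaround in the driver scenario described is for the DLL to remember every buffer it hands to the callback and have the exported free routine accept only pointers it recognises. A rough sketch (single-threaded; locking and the real export names are left as assumptions):

        #include <cstdlib>
        #include <set>

        namespace {
            std::set<void *> g_live_buffers;    // every buffer the driver handed out
        }

        // Called by the driver before invoking the application's callback.
        void *driver_alloc(std::size_t n)
        {
            void *p = std::malloc(n);
            if (p != NULL)
                g_live_buffers.insert(p);
            return p;
        }

        // Exported free API: rejects pointers we never allocated (or already freed)
        // instead of crashing inside free().
        bool driver_free(void *p)
        {
            std::set<void *>::iterator it = g_live_buffers.find(p);
            if (it == g_live_buffers.end())
                return false;
            g_live_buffers.erase(it);
            std::free(p);
            return true;
        }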


  • Problem with recv() on a TCP connection

    - by michael
    Hi, I am simulating TCP communication on windows in C I have sender and a receiver communicating. sender sends packets of specific size to receiver. receiver gets them and send an ACK for each packet it received back to the sender. If the sender didn't get a specific packet (they are numbered in a header inside the packet) it sends the packet again to the receiver. Here is the getPacket function on the receiver side: //get the next packet from the socket. set the packetSize to -1 //if it's the first packet. //return: total bytes read // return: 0 if socket has shutdown on sender side, -1 error, else number of bytes received int getPakcet(char *chunkBuff,int packetSize,SOCKET AcceptSocket){ int totalChunkLen = 0; int bytesRecv=-1; bool firstTime=false; if (packetSize==-1) { packetSize=MAX_PACKET_LENGTH; firstTime=true; } int needToGet=packetSize; do { char* recvBuff; recvBuff = (char*)calloc(needToGet,sizeof(char)); if(recvBuff == NULL){ fprintf(stderr,"Memory allocation problem\n"); return -1; } bytesRecv = recv(AcceptSocket, recvBuff, needToGet, 0); if (bytesRecv == SOCKET_ERROR){ fprintf(stderr,"recv() error %ld.\n", WSAGetLastError()); totalChunkLen=-1; return -1; } if (bytesRecv == 0){ fprintf(stderr,"recv(): socket has shutdown on sender side"); return 0; } else if(bytesRecv > 0) { memcpy(chunkBuff + totalChunkLen,recvBuff,bytesRecv); totalChunkLen+=bytesRecv; } needToGet-=bytesRecv; } while ((totalChunkLen < packetSize) && (!firstTime)); return totalChunkLen; } i use firstTime because for the first time the receiver doesn't know the normal package size that the sender is going to send to it, so i use a MAX_PACKET_LENGTH to get a package and then set the normal package size to the num of bytes i have received my problem is the last package. it's size is less than the package size so lets say last package size is 2 and the normal package size is 4. so recv() gets two bytes, continues to the while condition, then totalChunkLen < packetSize because 2<4 so it iterates the loop again and the gets stuck in recv() because it's blocking because the sender has nothing to send. on the sender side i can't close the connection because i didn't ACK back, so it's kind of a deadlock. receiver is stuck because it's waiting for more packages but sender has nothing to send. i don't want to use a timeout for recv() or to insert a special character to the package header to mark that it is the last one what can i do ? thanks
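
    The hang comes from the receive loop insisting on packetSize bytes that the sender never sends for the final, shorter packet. If touching the wire format is acceptable at all (a fixed-size length prefix is not a sentinel character), the usual framing is "size first, then payload", so the receiver always knows exactly how much to wait for. A sketch of that pattern, assuming Winsock and a 4-byte network-order length field:

        #include <winsock2.h>    // SOCKET, recv, ntohl

        // Read exactly len bytes, looping over short reads.
        // Returns 1 on success, 0 if the peer closed the connection, -1 on error.
        static int recv_all(SOCKET s, char *buf, int len)
        {
            int got = 0;
            while (got < len) {
                int n = recv(s, buf + got, len - got, 0);
                if (n == 0)  return 0;     // orderly shutdown by the sender
                if (n < 0)   return -1;    // socket error
                got += n;
            }
            return 1;
        }

        // One length-prefixed packet: 4-byte size in network order, then payload.
        static int recv_packet(SOCKET s, char *buf, int maxlen, int *out_len)
        {
            unsigned long netlen = 0;
            int rc = recv_all(s, (char *)&netlen, (int)sizeof netlen);
            if (rc <= 0)
                return rc;
            int len = (int)ntohl(netlen);
            if (len < 0 || len > maxlen)
                return -1;                 // malformed length / protocol error
            *out_len = len;
            return recv_all(s, buf, len);
        }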


  • Java template classes using generator or similar?

    - by Hugh Perkins
    Is there some library or generator that I can use to generate multiple templated java classes from a single template? Obviously Java does have a generics implementation itself, but since it uses type-erasure, there are lots of situations where it is less than adequate. For example, if I want to make a self-growing array like this: class EasyArray { T[] backingarray; } (where T is a primitive type), then this isn't possible. This is true for anything which needs an array, for example high-performance templated matrix and vector classes. It should probably be possible to write a code generator which takes a templated class and generates multiple instantiations, for different types, eg for 'double' and 'float' and 'int' and 'String'. Is there something that already exists that does this? Edit: note that using an array of Object is not what I'm looking for, since it's no longer an array of primitives. An array of primitives is very fast, and uses only as much space a sizeof(primitive) * length-of-array. An array of object is an array of pointers/references, that points to Double objects, or similar, which could be scattered all over the place in memory, require garbage collection, allocation, and imply a double-indirection for access. Edit2: good god, voted down for asking for something that probably doesn't currently exist, but is technically possible and feasible? Does that mean that people looking for ways to improve things have already left the java community? Edit3: Here is code to show the difference in performance between primitive and boxed arrays: int N = 10*1000*1000; double[]primArray = new double[N]; for( int i = 0; i < N; i++ ) { primArray[i] = 123.0; } Object[] objArray = new Double[N]; for( int i = 0; i < N; i++ ) { objArray[i] = 123.0; } tic(); primArray = new double[N]; for( int i = 0; i < N; i++ ) { primArray[i] = 123.0; } toc(); tic(); objArray = new Double[N]; for( int i = 0; i < N; i++ ) { objArray[i] = 123.0; } toc(); Results: double[] array: 148 ms Double[] array: 4614 ms Not even close!


  • Inlining an array of non-default constructible objects in a C++ class

    - by porgarmingduod
    C++ doesn't allow a class containing an array of items that are not default constructible: class Gordian { public: int member; Gordian(int must_have_variable) : member(must_have_variable) {} }; class Knot { Gordian* pointer_array[8]; // Sure, this works. Gordian inlined_array[8]; // Won't compile. Can't be initialized. }; As even beginner C++ users know, the language guarantees that all members are initialized when constructing a class. And it doesn't trust the user to initialize everything in the constructor - one has to provide valid arguments to the constructors of all members before the body of the constructor even starts. Generally, that's a great idea as far as I'm concerned, but I've come across a situation where it would be a lot easier if I could actually have an array of non-default constructible objects. The obvious solution: Have an array of pointers to the objects. This is not optimal in my case, as I am using shared memory. It would force me to do extra allocation from an already contended resource (that is, the shared memory). The entire reason I want to have the array inlined in the object is to reduce the number of allocations. This is a situation where I would be willing to use a hack, even an ugly one, provided it works. One possible hack I am thinking about would be: class Knot { public: struct dummy { char padding[sizeof(Gordian)]; }; dummy inlined_array[8]; Gordian* get(int index) { return reinterpret_cast<Gordian*>(&inlined_array[index]); } Knot() { for (int x = 0; x != 8; x++) { new (get(x)) Gordian(x*x); } } }; Sure, it compiles, but I'm not exactly an experienced C++ programmer. That is, I couldn't possibly trust my hacks less. So, the questions: 1) Does the hack I came up with seem workable? What are the issues? (I'm mainly concerned with C++0x on newer versions of GCC). 2) Is there a better way to inline an array of non-default constructible objects in a class?
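
    For what it's worth, the reinterpret_cast hack does work in practice but leaves two gaps: nothing guarantees the char padding is aligned suitably for Gordian, and the in-place objects are never destroyed. A sketch that patches both is below; the union with double/long is only an assumed-sufficient pre-C++11 alignment trick for a Gordian this simple (with C++0x/C++11 one would reach for std::aligned_storage instead).

        #include <new>          // placement new
        #include <iostream>

        class Gordian {
        public:
            int member;
            explicit Gordian(int must_have_variable) : member(must_have_variable) {}
            ~Gordian() { /* whatever cleanup Gordian really needs */ }
        };

        class Knot {
            // Raw, suitably aligned storage for 8 Gordians.
            union Slot {
                char   bytes[sizeof(Gordian)];
                double align_d;
                long   align_l;
            };
            Slot storage_[8];

            Gordian *get(int i) { return reinterpret_cast<Gordian *>(&storage_[i]); }

        public:
            Knot() {
                for (int i = 0; i < 8; ++i)
                    new (get(i)) Gordian(i * i);     // construct in place
            }
            ~Knot() {
                for (int i = 7; i >= 0; --i)
                    get(i)->~Gordian();              // matching explicit destroy
            }
            int value(int i) { return get(i)->member; }
        };

        int main() {
            Knot k;
            std::cout << k.value(3) << "\n";         // prints 9
            return 0;
        }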


  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice-versa. In spite of increasing the volume of data, the data transfer time remains the same as if no data transfer is actually taking place. I am posting the code. #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ #include <sys/time.h> __global__ void device_volume(float *x_d,float *y_d) { int index = blockIdx.x * blockDim.x + threadIdx.x; } int main(void) { float *x_h,*y_h,*x_d,*y_d,*z_h,*z_d; long long size=9999999; long long nbytes=size*sizeof(float); timeval t1,t2; double et; x_h=(float*)malloc(nbytes); y_h=(float*)malloc(nbytes); z_h=(float*)malloc(nbytes); cudaMalloc((void **)&x_d,size*sizeof(float)); cudaMalloc((void **)&y_d,size*sizeof(float)); cudaMalloc((void **)&z_d,size*sizeof(float)); gettimeofday(&t1,NULL); cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("\n %ld\t\t%f\t\t",nbytes,et); et=0.0; //printf("%f %d\n",seconds,CLOCKS_PER_SEC); // launch a kernel with a single thread to greet from the device //device_volume<<<1,1>>>(x_d,y_d); gettimeofday(&t1,NULL); cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f\n",et); cudaFree(x_d); cudaFree(y_d); cudaFree(z_d); return 0; } Can anybody help me with this issue? Thanks
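
    A common refinement when timing copies like these, sketched below as host-only code, is to check every CUDA call's return status and to bracket the transfer with CUDA events, which removes any ambiguity about when the copy starts and finishes; the buffer size is an arbitrary stand-in, and the host memory is touched once up front so page faults are not part of the measurement.

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <cuda_runtime.h>

        #define CHECK(call)                                                   \
            do {                                                              \
                cudaError_t err = (call);                                     \
                if (err != cudaSuccess) {                                     \
                    std::fprintf(stderr, "%s failed: %s\n", #call,            \
                                 cudaGetErrorString(err));                    \
                    return 1;                                                 \
                }                                                             \
            } while (0)

        int main()
        {
            const size_t n = 10 * 1000 * 1000;       // ~40 MB of floats
            const size_t nbytes = n * sizeof(float);

            float *h = (float *)std::malloc(nbytes);
            if (h == NULL)
                return 1;
            std::memset(h, 0, nbytes);               // touch host pages up front

            float *d = NULL;
            CHECK(cudaMalloc((void **)&d, nbytes));

            cudaEvent_t start, stop;
            CHECK(cudaEventCreate(&start));
            CHECK(cudaEventCreate(&stop));

            CHECK(cudaEventRecord(start, 0));
            CHECK(cudaMemcpy(d, h, nbytes, cudaMemcpyHostToDevice));
            CHECK(cudaEventRecord(stop, 0));
            CHECK(cudaEventSynchronize(stop));       // wait until the copy is done

            float ms = 0.0f;
            CHECK(cudaEventElapsedTime(&ms, start, stop));
            std::printf("Host-to-device copy of %lu bytes took %.3f ms\n",
                        (unsigned long)nbytes, ms);

            CHECK(cudaEventDestroy(start));
            CHECK(cudaEventDestroy(stop));
            CHECK(cudaFree(d));
            std::free(h);
            return 0;
        }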


  • Windows: what is the difference between DEP always on and DEP opt-out with no exceptions?

    - by Peter Mortensen
    What is the difference between DEP always on ("/NoExecute=AlwaysOn" in boot.ini) and DEP opt-out ( "/NoExecute=OptOut" in boot.ini) with no exceptions? "no exceptions" = empty list of programs for which DEP does not apply. DEP = Data Execution Prevention (hardware). One would expect it to work the same way, but it makes a difference for some applications. E.g. for all versions of UltraEdit 14 (14.2). It crashes at startup for DEP always on, at least on Microsoft Windows XP Professional Edition x64 edition. (2010-03-11: this problem has been fixed with UltraEdit 15.2 and later.) Update 1: I think this difference is caused by the backdoors that Microsoft has put into hardware DEP for OptOut, according to Fabrice Roux (see below). In the case of IrfanView, for which Steve Gibson observed the same difference as I did for UltraEdit (see below), the difference is caused by a non-DEP aware EXE packer (ASPack) that Microsoft coded a backdoor for. Is there a difference between Windows XP, Windows Vista and Windows 7 ? Is there a difference between 32 bit and 64 bit versions of Windows ? Sources: From [http://blog.fabriceroux.com/index.php/2007/02/26/hardware_dep_has_a_backdoor?blog=1], "Hardware DEP has a backdoor" by Fabrice Roux. 2007-02-26. "IrfanView was not using any trick to evade DEP ... Microsoft just coded a backdoor used only in OPTOUT. Bascially Microsoft checks the executable header for a section matching one of the 3 strings. If one these strings is found, DEP will be turned OFF for this application by windows. ... 'aspack', 'pcle', 'sforce'" From [http://www.grc.com/sn/sn-078.htm], by Steve Gibson. "I can’t find any documentation on Microsoft’s site anywhere, because we’re seeing a difference between always-on and opt-out. That is, you would imagine that always-on mode would be the same as opting out if you weren’t having any opt-out programs. It turns out it’s not the case. For example ... the IrfanView file viewer ... runs fine in opt-out mode, even if it has not been opted out. But it won’t launch, Windows blocks it from launching ... in always-on mode." From [http://www.grc.com/sn/sn-083.htm], by Steve Gibson. "... IrfanView ... won’t run with DEP turned on. It’s because it uses an EXE packer, an executable compression program called ASPack. And it makes sense that it wouldn’t because naturally an executable compressor has got to decompress the executable, so it allocates a bunch of data memory into which it decompresses the compressed executable, and then it runs it. Well, it’s running a data allocation, which is exactly what DEP is designed to stop. On the other hand, UPX, which is actually the leading and most popular EXE compressor, it’s DEP- compatible because those guys realized, hey, when we allocate this memory, we should mark the pages as executable."

