Search Results

Search found 11396 results on 456 pages for 'simply denis'.

Page 410 of 456

  • WP7 silverlight custom control using popup

    - by Miloud B.
    Hi guys, I'm creating a custom datepicker, I have a textbox, once clicked it opens a calendar within a popup. What I want to do is change the size of the popup so it shows my whole calendar, but I can't manage to change it..., I've tried using Height, Width, MinHeight, MinWidth... but it doesn't work, the popup keep showing with a fixed size. The thing is that my popup's parent property isn't evaluated since it has expression issues (according to debugger), so I'm sure my popup's parent isn't the main screen( say layout grid). How can I for example make my popup open within a specific context ? This part of my code isn't XAML, it's C# code only and it looks like: using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.Windows.Controls.Primitives; namespace CalendarBranch.components { public class wpDatePicker:TextBox { private CalendarPopup calendar; private Popup popup; public wpDatePicker() { this.calendar = new CalendarPopup(); this.popup = new Popup(); this.popup.Child = this.calendar; this.popup.Margin = new Thickness(0); this.MouseLeftButtonUp += new MouseButtonEventHandler(wpDatePicker_MouseLeftButtonUp); this.calendar.onDateSelect += new EventHandler(onDateSelected); this.IsReadOnly = true; } protected void wpDatePicker_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { this.popup.Height = this.calendar.Height; this.popup.Width = this.calendar.Width; this.popup.HorizontalAlignment = HorizontalAlignment.Center; this.popup.VerticalAlignment = VerticalAlignment.Center; this.popup.HorizontalOffset = 0; this.popup.VerticalOffset = 0; this.popup.MinHeight = this.calendar.Height; this.popup.MinWidth = this.calendar.Width; this.popup.IsOpen = true; } private void onDateSelected(Object sender, EventArgs ea) { this.Text = this.calendar.SelectedValue.ToShortDateString(); this.popup.IsOpen = false; } } } PS: the class Calendar is simply a UserControl that contains a grid with multiple columns, HyperLinkButtons and TextBlocks, so nothing special. Thank you in advance guys ;) Cheers Miloud B.
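
    A minimal sketch of one way to approach the sizing, assuming the WP7 portrait screen and the same popup/calendar fields as above (the pixel values are assumptions): size the popup's Child rather than the Popup itself, since a Popup generally just takes the size of whatever it hosts.

        protected void wpDatePicker_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            // Size the hosted control; Popup.Height/Width do not constrain the child
            this.calendar.Width = 480;    // WP7 portrait screen width (assumption)
            this.calendar.Height = 400;   // illustrative height
            this.popup.HorizontalOffset = 0;
            this.popup.VerticalOffset = 0;
            this.popup.IsOpen = true;
        }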

    Read the article

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simuations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot, actual table extends to a few thousand sessions, roughly 100 students). | | Session 1 | Session 2 | Session 3 | |----------|-----------|-----------|-----------| |Stu1 |Proj_AA |Proj_AB |Proj_AB | |----------|-----------|-----------|-----------| |Stu2 |Proj_AB |Proj_AA |Proj_AC | |----------|-----------|-----------|-----------| |Stu3 |Proj_AC |Proj_AC |Proj_AA | |----------|-----------|-----------|-----------| Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is over-written. Thus what I'd really like to do is to store all the allocation results. This is important since I later need to derive from the data, information such as: which project Stu1 got assigned to the most or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions). Question(s): What methods can I possibly use to basically store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before beginning the next allocation cycle. One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this does give me an opportunity to delve into databases. Now the database structure I was recommended was: |----------|-----------|-----------| |Session |Student |Project | |----------|-----------|-----------| |1 |Stu1 |Proj_AA | |----------|-----------|-----------| |1 |Stu2 |Proj_AB | |----------|-----------|-----------| |1 |Stu3 |Proj_AC | |----------|-----------|-----------| |2 |Stu1 |Proj_AB | |----------|-----------|-----------| |2 |Stu2 |Proj_AA | |----------|-----------|-----------| |2 |Stu3 |Proj_AC | |----------|-----------|-----------| |3 |Stu1 |Proj_AB | |----------|-----------|-----------| |3 |Stu2 |Proj_AC | |----------|-----------|-----------| |3 |Stu3 |Proj_AA | |----------|-----------|-----------| Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student for a particular session. Or I can merely get the entire allocation run for a particular session. Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (happens if all projects that he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object. I apologise for asking multiple questions but since these are closely-related, I thought I'd ask them in the same space.
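
    A minimal sketch of the suggested table with a composite primary key, assuming a pre-2.0 declarative SQLAlchemy and SQLite; the table and column names are illustrative, and an unassigned student simply stores None in the project column.

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import sessionmaker

        Base = declarative_base()

        class Allocation(Base):
            __tablename__ = 'allocations'
            # session_id + student together form the composite primary key
            session_id = Column(Integer, primary_key=True)
            student = Column(String(32), primary_key=True)
            project = Column(String(32), nullable=True)  # None when no project was assigned

        engine = create_engine('sqlite:///allocations.db')
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        session.add_all([
            Allocation(session_id=1, student='Stu1', project='Proj_AA'),
            Allocation(session_id=1, student='Stu2', project='Proj_AB'),
            Allocation(session_id=2, student='Stu1', project=None),
        ])
        session.commit()

        # one student's record for one session, via the composite key
        print(session.query(Allocation).get((1, 'Stu1')).project)
        # the whole allocation run for session 1
        print(session.query(Allocation).filter_by(session_id=1).all())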

    Read the article

  • Flash - Ease out scrolling movieclip, scrolled by sensor on each side

    - by webfac
    I am hoping someone can shed some light on this issue for me... I have a movieclip that is scrolled by means of a 'sensor' on each side of the stage. The clip scrolls fine in both directions, however here is my problem: When the users mouse leaves the stage, the movie clip stops dead in it's tracks, and this does not provide a noce smooth effect. Looking at the code below, is there any way I could tell the animation to ease out when the users mouse leaves the stage rather than simply stop suddenly? Much appreciated! class Sensor extends MovieClip { public function Sensor() { } public function onEnterFrame() { // var active:Boolean; // is mouse within sensor? if ((_level2._xmouse >= this._x) && _level2._xmouse <= (this._x + this._width)) { active = true; } else { active = false; } if(active) { // which area of the sensor is it in? var relative_position:Number; var relative_difference:Number; relative_position = _level2._xmouse / this._width * 100.0; //_level2._logger.message("relative position is " + relative_position); if (!isNaN(relative_position)) { // depending on which area it is in, tend towards a particular adjustment if(relative_position > _level2._background_right_threshold) { relative_difference = Math.abs(relative_position - _level2._background_right_threshold); //relative_difference = 10; } else if(relative_position < _level2._background_left_threshold) { relative_difference = Math.abs(_level2._background_left_threshold - relative_position); relative_difference *= -1; } else { _level2._background_ideal_adjustment = 0; } var direction:Number; if(_level2._pan_direction == "left") { direction = 1; } else { direction = -1; } _level2._background_ideal_adjustment = direction * (relative_difference / 10) * _level2._pan_augmentation; //_level2._logger.message("ideal adjustment is " + _level2._background_ideal_adjustment); if (!isNaN(_level2._background_ideal_adjustment)) { // what is the difference between the ideal adjustment and the current adjustment? // add a portion of the difference to the adjustment: // this has the effect that the adjustment "tends towards" the ideal var difference:Number; difference = _level2._background_ideal_adjustment - _level2._background_adjustment; _level2._background_adjustment += (difference / _level2._background_tension); // calculate what the new background _x position would be var background_x:Number; var projected_x:Number; background_x = _level2["_background"]._x; projected_x = background_x += _level2._background_adjustment; //_level2._logger.message("projected _x is " + projected_x); // if the _x position is valid, change it if(projected_x > 0) projected_x = 0; if(projected_x < -(_level2["_background"]._width - Stage.width)) projected_x = -(_level2["_background"]._width - Stage.width); _level2["_background"]._x = projected_x; // i recommend that you move your other movieclips with code here } } } } }
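
    One way to get the ease-out described above, sketched as a change inside onEnterFrame (the 0.9 friction factor and the thresholds are arbitrary assumptions): when the sensor is inactive, keep applying the last adjustment but let it decay each frame instead of dropping straight to zero.

        public function onEnterFrame() {
            // ... existing logic when the mouse is inside the sensor ...
            if (!active) {
                // decay the previous adjustment so the clip glides to a stop
                _level2._background_adjustment *= 0.9;
                if (Math.abs(_level2._background_adjustment) < 0.1) {
                    _level2._background_adjustment = 0;
                }
                // then apply the (shrinking) adjustment to _background._x with the
                // same 0 / -(width - Stage.width) clamping used above
            }
        }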

    Read the article

  • How can I import one Gradle script into another?

    - by Ant
    Hi all, I have a complex Gradle script that wraps up a load of functionality around building and deploying a number of NetBeans projects to a number of environments. The script works very well, but in essence it is all configured through half a dozen maps holding project and environment information. I want to abstract the tasks away into another file, so that I can simply define my maps in a simple build file and import the tasks from the other file. In this way, I can use the same core tasks for a number of projects and configure those projects with a simple set of maps. Can anyone tell me how I can import one Gradle file into another, in a similar manner to Ant's import task? I've trawled Gradle's docs to no avail so far. Additional info: after Tom's response below, I thought I'd try and clarify exactly what I mean. Basically I have a Gradle script which runs a number of subprojects. However, the subprojects are all NetBeans projects, and come with their own Ant build scripts, so I have tasks in Gradle to call each of these. My problem is that I have some configuration at the top of the file, such as: projects = [ [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"], [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"] ] I then generate tasks such as: projects.each({ task "checkout_$it.shortname" << { // Code to, for example, check the module out from CVS using config from 'it'. } }) I have many of these sorts of task-generation snippets, and all of them are generic - they depend entirely on the config in the projects list. So what I want is a way to put this in a separate script and import it in the following sort of way: projects = [ [name:"MySubproject1", shortname: "sub1", env:"mainEnv", cvs_module="mod1"], [name:"MySubproject2", shortname: "sub2", env:"altEnv", cvs_module="mod2"] ] import("tasks.gradle") // This will import and run the script so that all tasks are generated for the projects given above. So in this example, tasks.gradle will have all the generic task-generation code in it, and will get run for the projects defined in the main build.gradle file. In this way, tasks.gradle is a file that can be used by all large projects that consist of a number of sub-projects with NetBeans Ant build files.
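
    A minimal sketch of that split, assuming a Gradle version that supports script plugins via "apply from:" (the property and task names are illustrative, and "<<" is the same old-style task syntax used above):

        // build.gradle -- per-project configuration only
        ext.projectDefs = [
            [name: "MySubproject1", shortname: "sub1", env: "mainEnv", cvs_module: "mod1"],
            [name: "MySubproject2", shortname: "sub2", env: "altEnv",  cvs_module: "mod2"]
        ]
        apply from: 'tasks.gradle'   // pulls in the generic task-generation logic

        // tasks.gradle -- generic tasks driven by the list defined by the caller
        projectDefs.each { p ->
            task("checkout_${p.shortname}") << {
                println "checking out ${p.cvs_module} for ${p.name}"
            }
        }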

    Read the article

  • How can I position some divs inside an unordered list so they line up with the root element of the list?

    - by Ronedog
    I want to position all the divs to line up to the left on the same x coordinate so it looks nice. Notice the picture below, how based on the number of nested categories the div (and its contents) show up at slightly different x coordinates. I need to have the div's line up at exactly the same x coordinate no matter how deeply nested. Note, the bottom most category always has a div for the content, but that div has to be situated inside the last < li . I am using an unordered list to display the menu and thought the best solution would be to grab the root category (Cat 2, and mCat1) and obtain their left offset using jquery, then simply use that value to update the positioning of the div...but I couldn't seem to get it to work just right. I would appreciate any advice or help that you are willing to give. Heres the HTML <ul id="nav> <li>Cat 2 <ul> <li>sub cat2</li> </ul> </li> <li>mCat1 <ul> <li>Subcat A <ul> <li>Subcat A.1 <ul> <li>Annie</li> </ul> </li> </ul> </li> </ul> </li> </ul> Heres some jquery I tried (I have do insert the div inside this .each() loop in order to retrieve some values, but basically, this selector is grabbing the last < li in the menu tree and placing a div after it and that is the div that I want to position. the 245 value was something I was playing around with to see how I could get things to line up, and I know its out of wack, but the problem is still the same no matter what I do: $("#nav li:not(:has(li))").each(function () { var self = $(this); var position = self.offset(); var xLeft = Math.round(position.left)- 245; console.log("xLeft:", xLeft ); self.after( '<div id="' + self.attr('p_node') + '_p_cont_div" class="property_position" style="display:none; left:' + xLeft + 'px;" /> ' ); }); Heres the css: .property_position{ float:left; position: relative; top: 0px; padding-top:5px; padding-bottom:10px; }

    Read the article

  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example: Dependency Injection (Spring, etc) MVC (Struts, ASP.NET MVC) ORMs (Linq To SQL, Hibernate) Agile Software Development These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not? EDIT It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective of what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems many SOers opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

    Read the article

  • Strange error inserting new relationship data in core-data

    - by michael
    My app will allow users to create a personalised list of events from a large list of events. I have a table view which simply displays these events, tapping on one of them takes the user to the event details view, which has a button "add to my events". In this detailed view I own the original event object, retrieved via an NSFetchedResultsController and passed to the detailed view (via a table cell, the same as the core data recipes sample). I have no trouble retrieving/displaying information from this "event". I am then trying to add it to the list of MyEvents represented by a one to many (inverse) relationship: This code: NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; [myEvents addEventObject:event];//ERROR And this code (suggested below): //would this add to or overwrite the "list" i am attempting to maintain NSManagedObjectContext *context = [event managedObjectContext]; MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context]; NSMutableSet *myEvent = [myEvents mutableSetValueForKey:@"event"]; [myEvent addObject:event]; //ERROR Bot produce (at the line indicated by //ERROR): *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance Seems I may have missed something fundamental. I cant glean any more information through the use of debugging tools, with my knowledge of them. 1) Is this a valid way to compile and store an editable list like this? 2) Is there a better way? 3) What could possibly be the deallocated instance in error? -- I have now modified the Event entity to have a to-many relationship called "myEvents" which referrers to itself. I can add Events to this fine, and logging the object shows the correct memory addresses appearing for the relationship after a [event addMyEventObject:event];. The same failure happens right after this however. I am still at a loss to understand what is going wrong. This is the backtrace #0 0x01f753a7 in ___forwarding___ () #1 0x01f516c2 in __forwarding_prep_0___ () #2 0x01c5aa8f in -[NSFetchedResultsController(PrivateMethods) _preprocessUpdatedObjects:insertsInfo:deletesInfo:updatesInfo:sectionsWithDeletes:newSectionNames:treatAsRefreshes:] () #3 0x01c5d63b in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:] () #4 0x0002e63a in _nsnote_callback () #5 0x01f40005 in _CFXNotificationPostNotification () #6 0x0002bef0 in -[NSNotificationCenter postNotificationName:object:userInfo:] () #7 0x01bbe17d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:] () #8 0x01c1d763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:] () #9 0x01ba25ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:] () #10 0x01bdfb3a in -[NSManagedObjectContext processPendingChanges] () #11 0x01bd0957 in _performRunLoopAction () #12 0x01f4d252 in __CFRunLoopDoObservers () #13 0x01f4c65f in CFRunLoopRunSpecific () #14 0x01f4bc48 in CFRunLoopRunInMode () #15 0x0273878d in GSEventRunModal () #16 0x02738852 in GSEventRun () #17 0x002ba003 in UIApplicationMain () Cheers.

    Read the article

  • Adding a generic image field onto a ModelForm in django

    - by Prairiedogg
    I have two models, Room and Image. Image is a generic model that can tack onto any other model. I want to give users a form to upload an image when they post information about a room. I've written code that works, but I'm afraid I've done it the hard way, and specifically in a way that violates DRY. Was hoping someone who's a little more familiar with django forms could point out where I've gone wrong. Update: I've tried to clarify why I chose this design in comments to the current answers. To summarize: I didn't simply put an ImageField on the Room model because I wanted more than one image associated with the Room model. I chose a generic Image model because I wanted to add images to several different models. The alternatives I considered were were multiple foreign keys on a single Image class, which seemed messy, or multiple Image classes, which I thought would clutter my schema. I didn't make this clear in my first post, so sorry about that. Seeing as none of the answers so far has addressed how to make this a little more DRY I did come up with my own solution which was to add the upload path as a class attribute on the image model and reference that every time it's needed. # Models class Image(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') image = models.ImageField(_('Image'), height_field='', width_field='', upload_to='uploads/images', max_length=200) class Room(models.Model): name = models.CharField(max_length=50) image_set = generic.GenericRelation('Image') # The form class AddRoomForm(forms.ModelForm): image_1 = forms.ImageField() class Meta: model = Room # The view def handle_uploaded_file(f): # DRY violation, I've already specified the upload path in the image model upload_suffix = join('uploads/images', f.name) upload_path = join(settings.MEDIA_ROOT, upload_suffix) destination = open(upload_path, 'wb+') for chunk in f.chunks(): destination.write(chunk) destination.close() return upload_suffix def add_room(request, apartment_id, form_class=AddRoomForm, template='apartments/add_room.html'): apartment = Apartment.objects.get(id=apartment_id) if request.method == 'POST': form = form_class(request.POST, request.FILES) if form.is_valid(): room = form.save() image_1 = form.cleaned_data['image_1'] # Instead of writing a special function to handle the image, # shouldn't I just be able to pass it straight into Image.objects.create # ...but it doesn't seem to work for some reason, wrong syntax perhaps? upload_path = handle_uploaded_file(image_1) image = Image.objects.create(content_object=room, image=upload_path) return HttpResponseRedirect(room.get_absolute_url()) else: form = form_class() context = {'form': form, } return direct_to_template(request, template, extra_context=context)
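
    For the handler part, a hedged alternative sketch: let the ImageField's own storage do the writing, so upload_to stays defined in exactly one place and handle_uploaded_file() goes away (this assumes the models shown above).

        # inside the `if form.is_valid():` branch of the view
        room = form.save()
        image_1 = form.cleaned_data['image_1']
        img = Image(content_object=room)
        # FieldFile.save() stores the file under the field's upload_to path
        # and saves the Image row in one step
        img.image.save(image_1.name, image_1)
        return HttpResponseRedirect(room.get_absolute_url())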

    Read the article

  • Using Excel VBA to send emails. Problem with attachments becoming embedded by accident.

    - by Alexei
    Hi, I am having an issue with an Excel macro I wrote that is used by several users within my company. It is used to send numerous emails daily with attachments that are also Excel workbooks. The issue is that sometimes, instead of the file simply being attached as it should be, it becomes an embedded object. This embedded object is openable by users on the email within the company (after clicking through the "YOu are about to activate an embedded object that may contain viruses or be otherwise harmful to your computer. It is important to be certain that it is from a trustworthy source. Do you want to continue?"), but those outside of the company do not see it at all. The email appears to have no attachment at all. Curiously, this appears to happen randomly, and only on some computers. So if the list has 15 email lists and attachments, it seems to happen randomly to anywhere between 0 and 15 of the emails. To be clear, my objective is to send emails with regular attachments. Running Excel 2003, Outlook 2003, and Windows XP. See code below. Please help! Sub Email() Dim P As String Dim N As String Dim M As String Dim Subject As String Dim Addresses As String Dim olApp As Outlook.Application Dim olNewMail As Outlook.MailItem Application.DisplayAlerts = False M = ActiveWorkbook.Name For c = 2 To 64000 If Range("B" & c) = "" Then Exit For If UCase(Range("E" & c)) = "Y" Then Workbooks(M).Sheets("Main").Activate Subject = Range("A" & c) Addresses = Range("B" & c) P = Range("C" & c) N = Range("D" & c) If Right(P, 1) <> "\" Then P = P & "\" If Right(N, 4) <> ".xls" Then N = N & ".xls" Set olApp = New Outlook.Application Set olNewMail = olApp.CreateItem(olMailItem) With olNewMail .Display .Recipients.Add Addresses Application.Wait (Now + TimeValue("0:00:01")) SendKeys ("{TAB}") .Subject = Subject .Attachments.Add P + N .Send End With Set olNewMail = Nothing Set olApp = Nothing End If Next c Range("E2:E65536").ClearContents Application.DisplayAlerts = True End Sub
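
    If it turns out to be format-related, one commonly cited cause of "embedded" attachments is mail going out in Rich Text format, where Outlook renders attachments as objects inside the body; a hedged tweak is to force the body format and drop the Display/SendKeys step, roughly:

        With olNewMail
            .BodyFormat = olFormatHTML   ' or olFormatPlain; Rich Text embeds attachments in the body
            .Recipients.Add Addresses
            .Subject = Subject
            .Attachments.Add P & N
            .Send                        ' no .Display / SendKeys needed for silent sending
        End With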

    Read the article

  • Purpose of Explicit Default Constructors

    - by Dennis Zickefoose
    I recently noticed a class in C++0x that calls for an explicit default constructor. However, I'm failing to come up with a scenario in which a default constructor can be called implicitly. It seems like a rather pointless specifier. I thought maybe it would disallow Class c; in favor of Class c = Class(); but that does not appear to be the case. Some relevant quotes from the C++0x FCD, since it is easier for me to navigate [similar text exists in C++03, if not in the same places] 12.3.1.3 [class.conv.ctor] A default constructor may be an explicit constructor; such a constructor will be used to perform default-initialization or value initialization (8.5). It goes on to provide an example of an explicit default constructor, but it simply mimics the example I provided above. 8.5.6 [decl.init] To default-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); 8.5.7 [decl.init] To value-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9) with a user-provided constructor (12.1), then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); In both cases, the standard calls for the default constructor to be called. But that is what would happen if the default constructor were non-explicit. For completeness sake: 8.5.11 [decl.init] If no initializer is specified for an object, the object is default-initialized; From what I can tell, this just leaves conversion from no data. Which doesn't make sense. The best I can come up with would be the following: void function(Class c); int main() { function(); //implicitly convert from no parameter to a single parameter } But obviously that isn't the way C++ handles default arguments. What else is there that would make explicit Class(); behave differently from Class();? The specific example that generated this question was std::function [20.8.14.2 func.wrap.func]. It requires several converting constructors, none of which are marked explicit, but the default constructor is.
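
    For what it's worth, one place the distinction does show up in C++0x/C++11 is copy-list-initialization with an empty braced list; a small illustrative sketch (not from the draft itself):

        struct A { A() {} };           // non-explicit default constructor
        struct B { explicit B() {} };  // explicit default constructor

        int main() {
            A a = {};     // OK: copy-list-initialization may pick a non-explicit default ctor
            B b1{};       // OK: direct-list-initialization still works
            // B b2 = {}; // ill-formed: copy-list-initialization cannot use an explicit ctor
            (void)a; (void)b1;
        }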

    Read the article

  • .NET 2.0 Process Elevation for App Installation

    - by Brian Gillespie
    We have an application written in both C++ and .NET that installs for all users in the Program Files folder. This application downloads new versions of itself (as MSI installers) and spawns the new installer process to replace itself. The install process as it exists today: Copy an install manager app (C#, .NET 2.0) to the temp directory. Call this 'Manager' Manager is executed with elevated privs per this article. The original application exits. Manager spawns the MSI installer (with elevated privs, since the copy is elevated) Manager spawns the new version of the app. The bug: The newly installed app is running in an elevated state. This causes problems I won't enumerate here. Ideally, the launch of the newly installed app would be run with the permissions of the original user. I can't figure out how to demote the app back to being the standard user after elevation. An inelegant hack: (yeah, yeah, this whole process is inelegant anyway) Copy the install manager to the temp directory Run the install manager with standard user privs. Lets call this instance 'LowlyManager'. Original application exits. LowlyManager spawns the app again, this time with elevated privs. Let's name this instance 'UpperManagement' UpperManagement spawns the installer UpperManagement exits gracefully, returning the exit code of the installer. LowlyManager interprets the error code from UpperManagement, and spawns the newly installed application. This time as the original invoker. Is there a better way to do this? (I've left out a bunch of other details before and after these steps that make the process smoother for the user, but this should be enough to understand the core of the problem I'm trying to solve.) Other requirements: We can't install as a per-user app The user shouldn't be presented with an authentication dialog box if UAC would have simply asked "are you sure you want to allow this?". I think this might kill a solution using WindowsImpersonationContext, but I'm not sure. The system needs to work on XP, Vista, and Windows 7 (even if there is a separate process for XP).

    Read the article

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO. We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two servers setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multi-restart a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them and while some are more stable than others we never quite got it right. From the default: java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ To currently: java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor, on average our amount of memory consumed is hovering in the mid 300MB and can go up to low 700MB under heavy load. Most of the crash will be logged in jrun4/bin/hs_err_pid*.log always an "Out of swap space" I've attached links to the hs_err and garbage collector log file from yesterday at the bottom of the post. The relevant part is (I think) this: Heap PSYoungGen total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000) eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000) from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000) to space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000) PSOldGen total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000) object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000) PSPermGen total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000) object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000) From it, I gather that its the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize but the total still show as 107520K!??! No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!! The log files: Sorry I know sendspace isn't the friendliest of places - if you have other host suggestion for log files let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post). The hs_err log file: http://www.sendspace.com/file/fgak8l The gc log: http://www.sendspace.com/file/w0r2ct

    Read the article

  • Turn class "Interfaceable"

    - by scooterman
    Hi folks, on my company's system we use a class to represent beans. It is just a holder of information using boost::variant and some serialization/deserialization stuff. It works well, but we have a problem: it is not behind an interface, and since we use modularization through DLLs, building an interface for it is getting very complicated, since it is used in almost every part of our app, and sadly interfaces (abstract classes) in C++ have to be accessed through pointers, which makes it almost impossible to refactor the entire system. Our structure is: DLL A: interface definition through an abstract class; DLL B: interface implementation. Is there a painless way to achieve that (maybe using templates, I don't know), or should I forget about making this work and simply link everything with DLL B? Thanks. Edit: Here is my example. This is in DLL A. BeanProtocol is a holder of N DataProtocol items, which are accessed by an index. class DataProtocol; class UTILS_EXPORT BeanProtocol { public: virtual DataProtocol& get(const unsigned int ) const { throw std::runtime_error("Not implemented"); } virtual void getFields(std::list<unsigned int>&) const { throw std::runtime_error("Not implemented"); } virtual DataProtocol& operator[](const unsigned int ) { throw std::runtime_error("Not implemented"); } virtual DataProtocol& operator[](const unsigned int ) const { throw std::runtime_error("Not implemented"); } virtual void fromString(const std::string&) { throw std::runtime_error("Not implemented"); } virtual std::string toString() const { throw std::runtime_error("Not implemented"); } virtual void fromBinary(const std::string&) { throw std::runtime_error("Not implemented"); } virtual std::string toBinary() const { throw std::runtime_error("Not implemented"); } virtual BeanProtocol& operator=(const BeanProtocol&) { throw std::runtime_error("Not implemented"); } virtual bool operator==(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); } virtual bool operator!=(const BeanProtocol&) const { throw std::runtime_error("Not implemented"); } virtual bool operator==(const char*) const { throw std::runtime_error("Not implemented"); } virtual bool hasKey(unsigned int field) const { throw std::runtime_error("Not implemented"); } }; The other class (named GenericBean) implements it. This is the only way I've found to make this work, but now I want to turn it into a true interface, remove the UTILS_EXPORT (which is a _declspec macro), and finally remove the forced linkage of B with A.

    Read the article

  • Cocoa NSOutputStream send to a connection

    - by Chuck
    Hi, I am new to Cocoa, but managed to get a connection (to a FTP) up and running, and I've set up an eventhandler for the NSInputStream iStream to alert every response (which also works). What I manage to get is simply the hello message and a connection timeout 60 sec, closing control connection. After searching stackoverflow and finding a lot of NSOutputStream write problems (e.g. http://stackoverflow.com/questions/703729/how-to-use-nsoutputstreams-write-message) and a lot of confusion in my google hits, I figured I'd try to ask my own question: I've tried reading the developer.apple.com doc on OutputStream, but it seems almost impossible for me to send some data (in this case just a string) to the "connection" via the NSOutputStream oStream. - (IBAction) send_something: sender { const char *send_command_char = [@"USER foo" UTF8String]; send_command_buffer = [NSMutableData dataWithBytes:send_command_char length:strlen(send_command_char) + 1]; uint8_t *readBytes = (uint8_t *)[send_command_buffer mutableBytes]; NSInteger byteIndex = 0; readBytes += byteIndex; int data_len = [send_command_buffer length]; unsigned int len = ((data_len - byteIndex >= 1024) ? 1024 : (data_len-byteIndex)); uint8_t buf[len]; (void)memcpy(buf, readBytes, len); len = [oStream write:(const uint8_t *)buf maxLength:len]; byteIndex += len; } the above seems not to result in any useable events. typing it under NSStreamEventHasSpaceAvailable sometimes give a response if I spam the ftp by keep creating new connection instances and keep sending some command whenever oStream has free space. In other words, nothing "right" and so I'm still unclear how to properly send a command to the connection. Should I open - write - close every time i want to write to oStream (and thus to the ftp) and can I then expect a reply (hasBytesAvailable event on iStream)? For some reason I find it very difficult to find any clear tutorials on this matter. Seems like there are more than a few in the same position as me: unclear how to use oStream write? Any little bit that can help clear this up is greatly appreciated! If needed I can write the rest of the code. Chuck

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.

    Read the article

  • Correct way of using/testing event service in Eclipse E4 RCP

    - by Thorsten Beck
    Allow me to pose two coupled questions that might boil down to one about good application design ;-) What is the best practice for using event based communication in an e4 RCP application? How can I write simple unit tests (using JUnit) for classes that send/receive events using dependency injection and IEventBroker ? Let’s be more concrete: say I am developing an Eclipse e4 RCP application consisting of several plugins that need to communicate. For communication I want to use the event service provided by org.eclipse.e4.core.services.events.IEventBroker so my plugins stay loosely coupled. I use dependency injection to inject the event broker to a class that dispatches events: @Inject static IEventBroker broker; private void sendEvent() { broker.post(MyEventConstants.SOME_EVENT, payload) } On the receiver side, I have a method like: @Inject @Optional private void receiveEvent(@UIEventTopic(MyEventConstants.SOME_EVENT) Object payload) Now the questions: In order for IEventBroker to be successfully injected, my class needs access to the current IEclipseContext. Most of my classes using the event service are not referenced by the e4 application model, so I have to manually inject the context on instantiation using e.g. ContextInjectionFactory.inject(myEventSendingObject, context); This approach works but I find myself passing around a lot of context to wherever I use the event service. Is this really the correct approach to event based communication across an E4 application? how can I easily write JUnit tests for a class that uses the event service (either as a sender or receiver)? Obviously, none of the above annotations work in isolation since there is no context available. I understand everyone’s convinced that dependency injection simplifies testability. But does this also apply to injecting services like the IEventBroker? This article describes creation of your own IEclipseContext to include the process of DI in tests. Not sure if this could resolve my 2nd issue but I also hesitate running all my tests as JUnit Plug-in tests as it appears impractible to fire up the PDE for each unit test. Maybe I just misunderstand the approach. This article speaks about “simply mocking IEventBroker”. Yes, that would be great! Unfortunately, I couldn’t find any information on how this can be achieved. All this makes me wonder whether I am still on a "good path" or if this is already a case of bad design? And if so, how would you go about redesigning? Move all event related actions to dedicated event sender/receiver classes or a dedicated plugin?
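
    On the "simply mocking IEventBroker" point, a minimal sketch of what that could look like with Mockito; the sender class and its constructor are hypothetical (the real code uses field injection), and the constant comes from the question above.

        import static org.mockito.Mockito.*;
        import org.eclipse.e4.core.services.events.IEventBroker;
        import org.junit.Test;

        public class EventSenderTest {
            @Test
            public void postsEventWhenSending() {
                IEventBroker broker = mock(IEventBroker.class);
                MyEventSender sender = new MyEventSender(broker); // hypothetical constructor injection
                sender.sendSomething();
                // verify the event was posted, without any Eclipse context or PDE involved
                verify(broker).post(eq(MyEventConstants.SOME_EVENT), any());
            }
        }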

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in customers environment. There are two types of environments - windows and linux/unix. Application is user based, no web stuff, pure Java. The requirement is to authenticate users which will use my application against customer provided user base. Meaning, customer installs my app, but uses his own users to grant or deny access to my app. Typical, right? I have three options to consider and I need to pick up the one which would be a) the most flexible to cover most common modern environments and b) would take least effort while stay robust and standard. Option (1) - Authenticate locally managing user credentials in some local storage, e.g. file. Customer would then add his users to my application and it will then check the passwords. Simple, clumsy but would work. Customers would have to punch every user they want to grant access to my app using some UI we will have to provide. Lots of work for me, headache to the customer. Option (2) - Use LDAP authentication. Customers would tell my app where to look for users and I will walk their directory resolving names into user names and trying to bind with found password. This is better approach IMO, but more fragile because I will have to walk an unknown directory structure and who knows if this will be permitted everywhere. Would be harder to test since there are many LDAP implementation out there, last thing I want is drowning in this voodoo. Option(3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In ideal world these two parameters would be all I need to set while customers could use their own administration tools to configure domain and kdc. My application would simply delegate user credentials to this third party (using JAAS or Spring security) and consider success when third party is happy with them. I personally prefer #3, but not sure what surprises I might face. Would this cover windows and *nix systems entirely? Is there another option to consider?
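
    A rough sketch of option 3 with plain JAAS, assuming a login configuration entry named "AppLogin" backed by Krb5LoginModule and a caller-supplied callback handler for the user's name and password (class and entry names are illustrative):

        import javax.security.auth.Subject;
        import javax.security.auth.callback.CallbackHandler;
        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;

        public final class KerberosAuthenticator {
            // realm and KDC are the two customer-supplied parameters
            public KerberosAuthenticator(String realm, String kdc) {
                System.setProperty("java.security.krb5.realm", realm);
                System.setProperty("java.security.krb5.kdc", kdc);
            }

            /** Returns true if the customer's KDC accepts the credentials. */
            public boolean authenticate(CallbackHandler nameAndPasswordHandler) {
                try {
                    // "AppLogin" must be a JAAS config entry using Krb5LoginModule
                    LoginContext lc = new LoginContext("AppLogin", nameAndPasswordHandler);
                    lc.login();
                    Subject ignored = lc.getSubject();  // holds the Kerberos principal if needed
                    return true;
                } catch (LoginException e) {
                    return false;   // KDC rejected the credentials -> deny access
                }
            }
        }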

    Read the article

  • NSKeyedUnarchiver chokes when trying to unarchive more than one object

    - by ajduff574
    We've got a custom matrix class, and we're attempting to archive and unarchive an NSArray containing four of them. The first seems to get unarchived fine (we can see that initWithCoder is called once), but then the program simply hangs, using 100% CPU. It doesn't continue or output any errors. These are the relevant methods from the matrix class (rows, columns, and matrix are our only instance variables): -(void)encodeWithCoder:(NSCoder*) coder { float temp[rows * columns]; for(int i = 0; i < rows; i++) { for(int j = 0; j < columns; j++) { temp[columns * i + j] = matrix[i][j]; } } [coder encodeBytes:(const void *)temp length:rows*columns*sizeof(float) forKey:@"matrix"]; [coder encodeInteger:rows forKey:@"rows"]; [coder encodeInteger:columns forKey:@"columns"]; } -(id)initWithCoder:(NSCoder *) coder { if (self = [super init]) { rows = [coder decodeIntegerForKey:@"rows"]; columns = [coder decodeIntegerForKey:@"columns"]; NSUInteger * len; *len = (unsigned int)(rows * columns * sizeof(float)); float * temp = (float * )[coder decodeBytesForKey:@"matrix" returnedLength:len]; matrix = (float ** )calloc(rows, sizeof(float*)); for (int i = 0; i < rows; i++) { matrix[i] = (float*)calloc(columns, sizeof(float)); } for(int i = 0; i < rows *columns; i++) { matrix[i / columns][i % columns] = temp[i]; } } return self; } And this is really all we're trying to do: NSArray * weightMatrices = [NSArray arrayWithObjects:w1,w2,w3,w4,nil]; [NSKeyedArchiver archiveRootObject:weightMatrices toFile:@"weights.archive"]; NSArray * newWeights = [NSKeyedUnarchiver unarchiveObjectWithFile:@"weights.archive"]; What's driving us crazy is that we can archive and unarchive a single matrix just fine. We've done so (successfully) with a matrix many times larger than these four combined.
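
    One thing worth double-checking in the initWithCoder: above (an observation, not a confirmed fix): -decodeBytesForKey:returnedLength: expects the address of a real NSUInteger rather than an uninitialized NSUInteger pointer, roughly:

        NSUInteger len = 0;   // an actual variable, not "NSUInteger * len;"
        const uint8_t *temp = (const uint8_t *)[coder decodeBytesForKey:@"matrix"
                                                          returnedLength:&len];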

    Read the article

  • Partial overriding in Java (or dynamic overriding while overloading)

    - by Lie Ryan
    If I have a parent-child pair that defines some method .foo() like this: class Parent { public void foo(Parent arg) { System.out.println("foo in Parent"); } } class Child extends Parent { public void foo(Child arg) { System.out.println("foo in Child"); } } When I call them like this: Child f = new Child(); Parent g = f; f.foo(new Parent()); f.foo(new Child()); g.foo(new Parent()); g.foo(new Child()); the output is: foo in Parent foo in Child foo in Parent foo in Parent But I want this output: foo in Parent foo in Child foo in Parent foo in Child I have a Child class that extends a Parent class. In the Child class, I want to "partially override" the Parent's foo(), that is, if the argument arg's type is Child then Child's foo() is called instead of Parent's foo(). That works OK when I call f.foo(...) as a Child; but if I refer to it through its Parent alias, as in g.foo(...), then the Parent's foo(..) gets called irrespective of the type of arg. As I understand it, what I'm expecting doesn't happen because method overloading in Java is early binding (i.e. resolved statically at compile time) while method overriding is late binding (i.e. resolved dynamically at run time), and since I defined a function with a technically different argument type, I'm technically overloading the Parent's class definition with a distinct definition, not overriding it. But what I want to do is conceptually "partial overriding" when .foo()'s argument is a subclass of the parent foo()'s argument. I know I can define a bucket override foo(Parent arg) in Child that checks whether arg's actual type is Parent or Child and passes it on properly, but if I have twenty Child classes, that would be lots of duplication of type-unsafe code. In my actual code, Parent is an abstract class named "Function" that simply throws NotImplementedException(). The children include "Polynomial", "Logarithmic", etc., and .foo() includes things like Child.add(Child), Child.intersectionsWith(Child), etc. Not all combinations of Child.foo(OtherChild) are solvable, and in fact not even all Child.foo(Child) are solvable. So I'm best left with defining everything as undefined (i.e. throwing NotImplementedException) and then defining only those that can be defined. So the question is: is there any way to override only part of the parent's foo()? Or is there a better way to do what I want to do?
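
    Since overload resolution is done at compile time against the static type, the usual workaround is a double-dispatch step: foo() hands control to the argument, which then reports its own runtime type. A minimal sketch (method names are illustrative; the real Function/Polynomial case would extend the same idea with one callback per subclass, visitor-style):

        class Parent {
            public void foo(Parent arg) { arg.dispatchFoo(); }   // let arg choose by its runtime type
            protected void dispatchFoo() { System.out.println("foo in Parent"); }
        }

        class Child extends Parent {
            @Override
            protected void dispatchFoo() { System.out.println("foo in Child"); }
        }

        public class Demo {
            public static void main(String[] args) {
                Child f = new Child();
                Parent g = f;
                f.foo(new Parent());  // foo in Parent
                f.foo(new Child());   // foo in Child
                g.foo(new Parent());  // foo in Parent
                g.foo(new Child());   // foo in Child
            }
        }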

    Read the article

  • Problems with contenteditable in Firefox

    - by Jonathan
    Hello, I am working on a Javascript WYSIWYG editor in Firefox. I am using a div with the contenteditable attribute set to true in order to accomplish this (I cannot use a contenteditable iframe for this particular project). This contenteditable div is nested in another div that is not contenteditable. I am encountering the following two problems when using execCommand to apply formatting, including font style and size, as well as bold, italic, and underline: When all text in the div is selected, execCommand simply does not work. execCommand works fine when only part of the text is selected, but does nothing when all of the text is selected. Applying formatting with no text selected yields unexpected results. For example, when calling execCommand('bold') with no text selected and then typing results in bold text being typed until a spacebar is inserted, at which point the bold formatting is lost (until another space is inserted, interestingly enough; then the text becomes bold again). To see what I mean, please try running the following HTML code in Firefox 3: <html> <head><title></title></head> <body> <button onClick="execCommand('bold', false, null);">Bold</button> <div style="width: 300px; border: 1px solid #000000;"> <div contenteditable="true">Some editable text</div> </div> </body> </html> Please try the following: Select the word "Some" only. Click the Bold button. This will make the text bold, as expected. Select the entire phrase "Some editable text" (either manually or using CTRL-A). Click the Bold button. Nothing happens. This demonstrates the first bug shown above. Hit the backspace key to clear the div. Click the Bold button and begin typing. Type a few words with spaces. This will demonstrate the second bug. Any ideas on what could be causing these problems and how to work around them would be greatly appreciated!

    Read the article

  • "Simple" sort a nested array using array_multisort or native PHP functions instead of my own foreach loop

    - by Ana Ban
    I have the following array of days of the week, with each day having hours of the day (the whole array represents the schedule of a part-time employee): Array ( [7] => Array ( [0] => 15 [1] => 14 [2] => 13 [3] => 11 [4] => 12 [5] => 10 ) [1] => Array ( [0] => 10 [1] => 13 [2] => 12 ) [6] => Array ( [0] => 14 ) [3] => Array ( [0] => 4 [1] => 5 [2] => 6 ) ) and I simply need to: sort asc each sub-array (2nd dimension) - no need to maintain the numeric keys, values are integers sort asc the 1st dimension and maintain the numeric, integer keys ie: Array ( [1] => Array ( [0] => 10 [1] => 12 [2] => 13 ) [3] => Array ( [0] => 4 [1] => 5 [2] => 6 ) [6] => Array ( [0] => 14 ) [7] => Array ( [0] => 10 [1] => 11 [2] => 12 [3] => 13 [4] => 14 [5] => 15 ) ) Additional info: only the keys of the 1st dimension and the values of the 2nd dimension (and of course their association) are meaningful to my use-case the 1st dimension can have at most 7 values, ranging from 1-7 (days of the week), and will have at least 1 value (1 day) the 2nd dimension can have at most 24 values, ranging from 0-23 (hours of each day), and will have at least 1 value (1 hour per day) I know I can do this with a foreach on the whole ksorted array and sort each 2nd dimension array: ksort($sched); foreach ($sched as &$array) sort($array); unset($array); but I was hoping I could achieve this with native php array function(s) instead. My search led me to try array_multisort(array_values($array), array_keys($array), $array) but I just can't make it work.
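
    For what it's worth, the two-step sort can also be written without an explicit foreach, using ksort plus array_walk with a by-reference closure (PHP 5.3+ assumed); array_multisort is an awkward fit here because the sub-arrays have different lengths.

        ksort($sched);                          // 1st dimension ascending, integer keys kept
        array_walk($sched, function (&$hours) {
            sort($hours);                       // 2nd dimension ascending, keys renumbered from 0
        });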

    Read the article

  • How can I test caching and cache busting?

    - by Nathan Long
    In PHP, I'm trying to steal a page from the Rails playbook (see 'Using Asset Timestamps' here): By default, Rails appends assets' timestamps to all asset paths. This allows you to set a cache-expiration date for the asset far into the future, but still be able to instantly invalidate it by simply updating the file (and hence updating the timestamp, which then updates the URL as the timestamp is part of that, which in turn busts the cache). It‘s the responsibility of the web server you use to set the far-future expiration date on cache assets that you need to take advantage of this feature. Here‘s an example for Apache: # Asset Expiration ExpiresActive On <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$"> ExpiresDefault "access plus 1 year" </FilesMatch> If you look at a the source for a Rails page, you'll see what they mean: the path to a stylesheet might be "/stylesheets/scaffold.css?1268228124", where the numbers at the end are the timestamp when the file was last updated. So it should work like this: The browser says 'give me this page' The server says 'here, and by the way, this stylesheet called scaffold.css?1268228124 can be cached for a year - it's not gonna change.' On reloads, the browser says 'I'm not asking for that css file, because my local copy is still good.' A month later, you edit and save the file, which changes the timestamp, which means that the file is no longer called scaffold.css?1268228124 because the numbers change. When the browser sees that, it says 'I've never seen that file! Give me a copy, please.' The cache is 'busted.' I think that's brilliant. So I wrote a function that spits out stylesheet and javascript tags with timestamps appended to the file names, and I configured Apache with the statement above. Now: how do I tell if the caching and cache busting are working? I'm checking my pages with two plugins for Firebug: Yslow and Google Page Speed. Both seem to say that my files are caching: "Add expires headers" in Yslow and "leverage browser caching" in Page Speed are both checked. But when I look at the Page Speed Activity, I see a lot of requests and waiting and no 'cache hits'. If I change my stylesheet and reload, I do see the change immediately. But I don't know if that's because the browser never cached in the first place or because the cache is busted. How can I tell?
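
    A minimal sketch of the helper described above (the function name and paths are assumptions), plus the quickest manual check: request one asset and read the response headers.

        function stylesheet_tag($path) {
            $file = $_SERVER['DOCUMENT_ROOT'] . $path;
            // the mtime changes whenever the file is edited, which changes the URL
            $stamp = file_exists($file) ? filemtime($file) : 0;
            return '<link rel="stylesheet" type="text/css" href="' . $path . '?' . $stamp . '" />';
        }

    On the verification side, "curl -I http://yoursite/stylesheets/scaffold.css?1268228124" should come back with the far-future Expires header set by the Apache block above; a browser that shows a 304 (or no request at all) for an unchanged URL is answering from cache, while an edited file produces a new URL and a fresh 200.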

    Read the article

  • C++ EZWindows Linker Errors when trying to run demos

    - by Brent Nash
    I'm attempting to download and use the EZWindows ( http://www.cs.virginia.edu/c++programdesign/software/ ) SPARC installation (the http://www.cs.virginia.edu/c++programdesign/software/EzWindows2a-SPARC.tar.gz file). When trying to build some of the examples that come with it, I'm getting some linker errors that I just can't figure out. Here's the result of the uname -a command on the machine I'm running on (on which I am NOT an administrator): SunOS AAA.BBB.edu 5.10 Generic_138888-07 sun4v sparc SUNW,T5240 And here is the result of the g++ -v command: gcc version 2.95.2 19991024 (release) If you untar/unzip the package, I'm trying to compile the example in samples/chap03/lawn by simply doing "gmake" in that directory, here's what I get. Here's the error I get: bash-3.00$ gtar xfz EzWindows2a-SPARC.tar.gz gtar: Removing leading `./' from member names bash-3.00$ cd chap03/lawn bash-3.00$ gmake clean ; gmake rm -f *.o *~ lawn make lawn g++ -I/X11.6/include -I../../EzWindows/include -c prog3-5.cpp prog3-5.cpp: In function `int ApiMain()': prog3-5.cpp:75: warning: initialization to `long int' from `const float' prog3-5.cpp:85: warning: initialization to `int' from `float' prog3-5.cpp:86: warning: initialization to `int' from `float' g++ -o lawn prog3-5.o -L/X11.6/lib -R/X11.6/lib -lX11 -lsocket -L../../EzWindows/lib -lezwin -lXpm ld: warning: symbol `clog' has differing types: (file /usr/usc/gnu/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.6/2.95.2/libstdc++.so type=OBJT; file /lib/libm.so type=FUNC); /usr/usc/gnu/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.6/2.95.2/libstdc++.so definition taken Undefined first referenced symbol in file __dl__Q2t12basic_string3ZcZt18string_char_traits1ZcZt24__default_alloc_template2b0i03RepPv ../../EzWindows/lib/libezwin.a(WindowManager.o) __eh_pc ../../EzWindows/lib/libezwin.a(WindowManager.o) clone__Q2t12basic_string3ZcZt18string_char_traits1ZcZt24__default_alloc_template2b0i03Rep ../../EzWindows/lib/libezwin.a(WindowManager.o) ld: fatal: Symbol referencing errors. No output written to lawn collect2: ld returned 1 exit status *** Error code 1 make: Fatal error: Command failed for target `lawn' Current working directory /export/samfs-bcf/rcf-11/bnash/sparc/chap03/lawn gmake: *** [default] Error 1 This particular run was built using g++ 2.95.2, but I've also tried with versions 3.3.2 and 4.2.1 with other equally strange errors. I'm pretty sure that EZWindows requires a 2.x version of gcc & g++. I've tried to make sure that my LD_LIBRARY_PATH and PATH are setup to include everything that's needed, but it seems that may be incorrect. I'm running out of ideas. Anyone have any other ones?

    Read the article

  • Unable to start basic application using sencha 2. Library files are not loaded

    - by Gendaful
    I am a newbie to Sencha 2. I wanted to run a basic application using Sencha Touch but I am unable to load the application. Here is what I have done. I have downloaded the NotesApp from miamicoder and I am trying to run the first chapter. I have attached the folder structure in the screenshot; please have a look to understand the folder structure. Here is my index.html <!DOCTYPE html> <html> <head> <title>My Notes</title> <link href="sencha-touch.css" rel="stylesheet" type="text/css" /> <script src="sencha-touch-debug.js" type="text/javascript"></script> <script src="app.js" type="text/javascript"></script> </head> <body> </body> </html> I have downloaded the Sencha SDK 2.1, took sencha-touch-debug.js and sencha-touch.css, placed them in the root of the folder and referenced them from index.html as shown above. I used to do the same thing in Sencha 1 and it worked, but I get the errors below when I try the same with Sencha 2. Failed to load resource file:///path/NotesApp-Book-Code-Ch1/src/event/Dispatcher.js?_dc=1354982532236 Failed to load resource file:///path/NotesApp-Book-Code-Ch1/src/event/publisher/Dom.js?_dc=1354982532238 Uncaught Error: [Ext.Loader] Failed loading 'file:///path/ebook-building-a-sencha-touch-2-app%20(1)/NotesApp-Book-Code-Ch1/src/event/Dispatcher.js', please verify that the file exists sencha-touch-debug.js:8324 Uncaught Error: [Ext.Loader] Failed loading 'file:///path/ebook-building-a-sencha-touch-2-app%20(1)/NotesApp-Book-Code-Ch1/src/event/publisher/Dom.js', please verify that the file exists Is it necessary to use the Sencha tools and generate the folder structure? Will simply copying the two lib files (sencha-touch-debug.js and sencha-touch.css) and referring to them from index.html not work with Sencha 2? Please help. Thank you.

    Read the article

  • Simple database design and LINQ

    - by Anders Svensson
    I have very little experience designing databases, and now I want to create a very simple database that does the same thing I have previously had in xml. Here's the xml: <services> <service type="writing"> <small>125</small> <medium>100</medium> <large>60</large> <xlarge>30</xlarge> </service> <service type="analysis"> <small>56</small> <medium>104</medium> <large>200</large> <xlarge>250</xlarge> </service> </services> Now, I wanted to create the same thing in a SQL database, and started doing this ( hope this formats ok, but you'll get the gist, four columns and two rows): > ServiceType Small Medium Large > > Writing 125 100 60 > > Analysis 56 104 200 This didn't work too well, since I then wanted to use LINQ to select, say, the Large value for Writing (60). But I couldn't use LINQ for this (as far as I know) and use a variable for the size (see parameters in the method below). I could only do that if I had a column like "Size" where Small, Medium, and Large would be the values. But that doesn't feel right either, because then I would get several rows with ServiceType = Writing (3 in this case, one for each size), and the same for Analysis. And if I were to add more servicetypes I would have to do the same. Simply repetitive... Is there any smart way to do this using relationships or something? Using the second design above (although not good), I could use the following LINQ to select a value with parameters sent to the method: protected int GetHourRateDB(string serviceType, Size size) { CalculatorLinqDataContext context = new CalculatorLinqDataContext(); var data = (from calculatorData in context.CalculatorDatas where calculatorData.Service == serviceType && calculatorData.Size == size.ToString() select calculatorData).Single(); return data.Hours; } But if there is another better design, could you please also describe how to do the same selection using LINQ with that design? Please keep in mind that I am a rookie at database design, so please be as explicit and pedagogical as possible :-) Thanks! Anders

    Read the article
