Search Results

Search found 20785 results on 832 pages for 'idea'.


  • USB 3.0 Not Working

    - by senzen
    I bought an Asus A43E laptop, which comes with one USB 3.0 port. Under Windows 7 it worked just fine, but after Windows 8 was installed the port doesn't work anymore. When I insert any flash drive, Windows doesn't react at all. I have been trying to find a driver, but I couldn't find one that fits. I know the port works, because I am able to boot Windows from a flash drive. It might be some missing driver. Do you guys have any idea how I can solve this?


  • HP LaserJet 1020 doesn't print

    - by Rod
    I've got an HP LaserJet 1020 printer connected to my 64-bit Windows 7 Ultimate machine, and I'm sharing this printer out. I've got another 64-bit Windows 7 machine running Home Premium. I installed HP's 64-bit printer driver on both machines. However, the strange thing is that if the Home Premium machine prints something, it never comes out unless I reboot the Ultimate machine. This is less than optimal. Any idea what's causing this, and whether there's anything I can do about it?


  • C programming multiple storage backends

    - by ahjmorton
    I am starting a side project in C which requires multiple storage backends to be driven by a particular piece of logic. These storage backends would each be linked in, with the decision of which one to use specified at runtime. So, for example, if I invoke my program with one set of parameters it will perform the operations in memory, but if I change the program configuration it will write to disk. The underlying idea is that each storage backend should implement the same protocol. In other words, the logic for performing operations should not need to know which backend it is operating on. Currently the way I have thought to provide this indirection is to have a struct of function pointers, with the logic calling these function pointers. Essentially the struct would contain all the operations needed to implement the higher-level logic. E.g.:

        typedef struct Context {
            void (*doPartOfDoOp)(void);
            int (*getResult)(void);
        } Context;

        /* logic.h */
        void doOp(Context *context) {
            /* bunch of stuff */
            context->doPartOfDoOp();
        }

        int getResult(Context *context) {
            /* bunch of stuff */
            return context->getResult();
        }

    My question is whether this way of solving the problem is one a C programmer would understand. I am a Java developer by trade but enjoy using C/C++. Essentially the Context struct provides an interface-like level of indirection. However, I would like to know if there is a more idiomatic way of achieving this.
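
    A minimal compilable sketch of how this usually plays out, with a hypothetical in-memory backend wired in through the struct (all names beyond Context/doOp are illustrative, not from the post):

        #include <stdio.h>

        typedef struct Context {
            void (*doPartOfDoOp)(void);
            int (*getResult)(void);
        } Context;

        /* One concrete backend, keeping its state in memory. */
        static int memory_result = 0;

        static void memory_doPartOfDoOp(void) {
            memory_result += 1;   /* stand-in for real work */
        }

        static int memory_getResult(void) {
            return memory_result;
        }

        /* Factory wiring the backend's functions into the interface. */
        static Context make_memory_context(void) {
            Context ctx = { memory_doPartOfDoOp, memory_getResult };
            return ctx;
        }

        /* The logic only ever sees a Context, never a concrete backend. */
        static void doOp(Context *context) {
            context->doPartOfDoOp();
        }

        int main(void) {
            Context ctx = make_memory_context();
            doOp(&ctx);
            printf("%d\n", ctx.getResult());   /* prints 1 */
            return 0;
        }

    This is a long-standing C idiom (it is essentially how stdio, qsort callbacks, and plugin systems expose pluggable behaviour), so a C programmer should find it familiar; a common refinement is adding a void *state member to Context so each backend can carry per-instance data instead of globals.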


  • CTRL-X not showing the bottom menu (compatible/nocompatible issue)

    - by simendsjo
    I'm having some strange behavior in vim. When I press C-X in insert mode, I see ^X flash quickly in the bottom right, but I don't get the menu at the bottom. The keybindings seem to work just fine anyway: C-X C-L gives me line completion. I've managed to find out how to "fix" this, but it just doesn't seem right: set compatible? echoes nocompatible, yet if I set it to compatible and then back to nocompatible, everything works. Trying that toggle at the end of my .vimrc doesn't help, and then I get some warnings from scripts. Any idea what's causing this and how I can fix it?


  • Hosting ESXi (free edition) [closed]

    - by Peter Adss
    We currently have one physical server running the free version of VMware ESXi that virtualizes a Windows SBS 2003 server and a Citrix server. We need to colocate the server and are investigating our options. Are there places that will host our virtual servers and save us the expense of shipping the physical server out for colocation? In my mind, we'd copy the VMs to disk and ship them out. Does the fact that we're using the free version of ESXi create a barrier to this idea? Thanks for the help; I realize this is a stupid question.


  • sqlserver.exe uses 100% CPU

    - by Markus
    I've created an application (ASP.NET) that once a day syncs an entire database from XML files. The sync first creates a transaction, then clears the database's tables, and then starts to parse the XML and insert the new rows into the database. When all the parsing is complete, it commits the transaction. This works fine on SQL Server 2005 (on another machine), but on SQL Server 2005 Express the process starts to use 100% CPU after a while, and as I log the inserts being made I can see that it just stops inserting. No exception, it just stops. Anyone got any idea what this may be? I've previously run the synchronization against another SQL Server 2005 Express instance (also on another computer), and that worked. The server has only 2GB RAM; could that be the problem?
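
    For reference, a minimal sketch of the sync pattern described above (the table, file and element names are made up). One thing worth experimenting with in a setup like this is committing in batches rather than holding a single transaction across the entire reload, since one long transaction over a full clear-and-insert can grow the lock set and the transaction log considerably:

        using System.Data.SqlClient;
        using System.Xml.Linq;

        class DatabaseSync
        {
            public static void Run(string connectionString, string xmlPath)
            {
                XDocument doc = XDocument.Load(xmlPath);

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (SqlTransaction tx = connection.BeginTransaction())
                    {
                        // Clear the target table inside the transaction.
                        using (SqlCommand clear = new SqlCommand(
                            "DELETE FROM Items", connection, tx))
                        {
                            clear.ExecuteNonQuery();
                        }

                        // Parse and insert row by row, as the post describes.
                        foreach (XElement row in doc.Descendants("item"))
                        {
                            using (SqlCommand insert = new SqlCommand(
                                "INSERT INTO Items (Name) VALUES (@name)", connection, tx))
                            {
                                insert.Parameters.AddWithValue("@name", (string)row.Element("name"));
                                insert.ExecuteNonQuery();
                            }
                        }

                        tx.Commit(); // everything or nothing
                    }
                }
            }
        }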


  • Data validation on WPF PasswordBox: type password, re-type password

    - by black sensei
    Hello experts! I've built a WPF Windows application in which there is a registration form. Using IDataErrorInfo, I could successfully bind the fields to class properties and, with the help of styles, display meaningful information about the errors to users. For the submit button I use a MultiDataTrigger with conditions (based on a post here on Stack Overflow). All went well. Now I need to do the same for the PasswordBox, and apparently it's not as straightforward. I found an article on wpftutorial and gave it a try, but for some reason it wasn't working. I've tried another one from functionalfun, and in the functionalfun case the properties (databind, databound) are not recognized as dependency properties, even after I changed their names as suggested somewhere in the comments. Plus, I have no idea whether it will work in a Windows application, since it's designed for the web. To give you an idea, here is some code for the textboxes:

        <Window.Resources>
            <data:General x:Key="recharge" />
            <Style x:Key="validButton" TargetType="{x:Type Button}"
                   BasedOn="{StaticResource {x:Type Button}}">
                <Setter Property="IsEnabled" Value="False"/>
                <Style.Triggers>
                    <MultiDataTrigger>
                        <MultiDataTrigger.Conditions>
                            <Condition Binding="{Binding ElementName=txtRecharge, Path=(Validation.HasError)}"
                                       Value="false" />
                        </MultiDataTrigger.Conditions>
                        <Setter Property="IsEnabled" Value="True" />
                    </MultiDataTrigger>
                </Style.Triggers>
            </Style>
            <Style x:Key="txtboxerrors" TargetType="{x:Type TextBox}"
                   BasedOn="{StaticResource {x:Type TextBox}}">
                <Style.Triggers>
                    <Trigger Property="Validation.HasError" Value="true">
                        <Setter Property="ToolTip"
                                Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/>
                        <Setter Property="Validation.ErrorTemplate">
                            <Setter.Value>
                                <ControlTemplate>
                                    <DockPanel LastChildFill="True">
                                        <TextBlock DockPanel.Dock="Bottom" FontSize="8" FontWeight="ExtraBold"
                                                   Foreground="Red" Padding="5 0 0 0"
                                                   Text="{Binding ElementName=showerror, Path=AdornedElement.(Validation.Errors)[0].ErrorContent}" />
                                        <Border BorderBrush="Red" BorderThickness="2">
                                            <AdornedElementPlaceholder Name="showerror" />
                                        </Border>
                                    </DockPanel>
                                </ControlTemplate>
                            </Setter.Value>
                        </Setter>
                    </Trigger>
                </Style.Triggers>
            </Style>
        </Window.Resources>

        <TextBox Margin="12,69,12,70" Name="txtRecharge" Style="{StaticResource txtboxerrors}">
            <TextBox.Text>
                <Binding Path="Field" Source="{StaticResource recharge}"
                         ValidatesOnDataErrors="True" UpdateSourceTrigger="PropertyChanged">
                    <Binding.ValidationRules>
                        <ExceptionValidationRule />
                    </Binding.ValidationRules>
                </Binding>
            </TextBox.Text>
        </TextBox>
        <Button Height="23" Margin="98,0,0,12" Name="btnRecharge" VerticalAlignment="Bottom"
                Click="btnRecharge_Click" HorizontalAlignment="Left" Width="75"
                Style="{StaticResource validButton}">Recharge</Button>

    And some C#:

        class General : IDataErrorInfo
        {
            private string _field;

            public string this[string columnName]
            {
                get
                {
                    string result = null;
                    if (columnName == "Field")
                    {
                        if (Util.NullOrEmtpyCheck(this._field))
                        {
                            result = "Field cannot be Empty";
                        }
                    }
                    return result;
                }
            }

            public string Error
            {
                get { return null; }
            }

            public string Field
            {
                get { return _field; }
                set { _field = value; }
            }
        }

    So, what suggestions do you have for me? I mean, how would you go about this? How do you do this, given that the data binding's first purpose here is not to load data into the fields; it is just (for now) for data validation. Thanks for reading this.
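
    For reference, the standard workaround (the one the functionalfun article implements) is an attached behaviour: PasswordBox.Password is deliberately not a dependency property, so a helper exposes a bindable attached property and keeps it in sync via the PasswordChanged event. A minimal, untested sketch with illustrative names; it works the same in a desktop WPF application as on the web:

        using System.Windows;
        using System.Windows.Controls;

        public static class PasswordBoxHelper
        {
            public static readonly DependencyProperty BoundPasswordProperty =
                DependencyProperty.RegisterAttached(
                    "BoundPassword", typeof(string), typeof(PasswordBoxHelper),
                    new PropertyMetadata(string.Empty, OnBoundPasswordChanged));

            public static string GetBoundPassword(DependencyObject d)
            {
                return (string)d.GetValue(BoundPasswordProperty);
            }

            public static void SetBoundPassword(DependencyObject d, string value)
            {
                d.SetValue(BoundPasswordProperty, value);
            }

            private static void OnBoundPasswordChanged(DependencyObject d,
                DependencyPropertyChangedEventArgs e)
            {
                PasswordBox box = d as PasswordBox;
                if (box == null)
                    return;

                // Detach while updating to avoid an endless update loop.
                box.PasswordChanged -= HandlePasswordChanged;
                string newValue = (string)e.NewValue ?? string.Empty;
                if (box.Password != newValue)
                    box.Password = newValue;
                box.PasswordChanged += HandlePasswordChanged;
            }

            private static void HandlePasswordChanged(object sender, RoutedEventArgs e)
            {
                PasswordBox box = (PasswordBox)sender;
                // Push the typed value back into the attached property,
                // which updates the binding source (your IDataErrorInfo object).
                SetBoundPassword(box, box.Password);
            }
        }

    The PasswordBox then binds through the helper (the Password property on the source object is hypothetical here), and the IDataErrorInfo machinery works exactly as it does for the TextBox:

        <PasswordBox local:PasswordBoxHelper.BoundPassword="{Binding Path=Password, Source={StaticResource recharge}, Mode=TwoWay, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />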


  • W3WP crashes when initializing a collection

    - by asbjornu
    I've created an ASP.NET MVC application that has an initializer attached via the PreApplicationStartMethodAttribute. When initializing, a collection is instantiated that implements an interface I've defined. When I instantiate this collection, w3wp.exe crashes with the following two incomprehensible entries in the event log:

        Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
        Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
        Exception code: 0xc00000fd
        Fault offset: 0x0000000000001177
        Faulting process id: 0x1348
        Faulting application start time: 0x01cb0224882f4723
        Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
        Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
        Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb

    And:

        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0

        Problem signature:
        P1: w3wp.exe
        P2: 7.5.7600.16385
        P3: 4a5bd0eb
        P4: clr.dll
        P5: 4.0.30319.1
        P6: 4ba21eeb
        P7: c00000fd
        P8: 0000000000001177
        P9:
        P10:

        Attached files:
        These files may be available here:
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb
        Report Status: 0

    If I remove the instantiation of the collection, the application starts normally. If I leave the instantiation in, w3wp crashes. If I modify the interface, w3wp still crashes. I've tried every variation I could come up with on the theme of keeping the instantiation but doing everything else differently, and w3wp still crashes. My biggest issue here is that I have absolutely no idea why w3wp is crashing. It's not a StackOverflowException or anything concrete like that; all I get is the unintelligent junk cited above. I've tried to use DebugDiag and IISState to debug the w3wp process, but DebugDiag is only available for post-dump analysis on x64 (I'm running on Windows 7 x64, so the w3wp process is thus 64-bit), and IISState says the following when I try to run it:

        D:\Programs\iisstate>IISState.exe -p 9204 -d
        Symbol search path is: SRV*D:\Programs\iisstate\symbols*http://msdl.microsoft.com/download/symbols

        IISState is limited to processes associated with IIS. If you require
        a generic debugger, please use WinDBG or CDB. They are available for
        download from http://www.microsoft.com/ddk/debugging.

        This error may also occur if a debugger is already attached to the
        process being checked.

        Incorrect Process Attachment

    I've double-checked ten times that the process ID of my w3wp process is correct; I suspect that IISState, too, can only debug x86 processes. Setting a breakpoint anywhere in the application does absolutely nothing: the breakpoint isn't hit, and w3wp crashes as soon as the request comes through to IIS from the browser. Starting the application with F5 in Visual Studio 2010, or starting another application to get the w3wp process up and running and then attaching the VS2010 debugger to it and visiting the faulting application, doesn't help either. I've also tried to add an HTTP module as described in KB-911816, as well as add this to my web.config file:

        <configuration>
            <runtime>
                <legacyUnhandledExceptionPolicy enabled="true" />
            </runtime>
        </configuration>

    Needless to say, it makes absolutely no difference. So I'm left with no way to debug the w3wp process, no way to extract any information from it, and complete garbage dumped in my event log. If anybody has any idea how to debug this problem, please let me know!

    Update: My collection was initialized based on RouteTable.Routes, which might have thrown an exception (perhaps by not yet being initialized itself at such an early stage of the ASP.NET lifecycle). Postponing the communication with RouteTable.Routes until a later stage solved the problem. While I don't really need an answer to this question anymore, I still find it so obscure that I'll leave it open for anyone to comment on and answer; I found no existing posts on this problem anywhere, so it might be a good reference in the future.


  • Need help with setting up comet code

    - by Saif Bechan
    Does anyone know of a way, or think it's possible, to connect Node.js with the Nginx HTTP push module to maintain a persistent connection between client and browser? I am new to comet, so I just don't understand the publishing side etc.; maybe someone can help me with this. What I have set up so far is the following. I downloaded the jQuery.comet plugin and set up the following basic code:

    Client JavaScript:

        <script type="text/javascript">
        function updateFeed(data) {
            $('#time').text(data);
        }

        function catchAll(data, type) {
            console.log(data);
            console.log(type);
        }

        $.comet.connect('/broadcast/sub?channel=getIt');
        $.comet.bind(updateFeed, 'feed');
        $.comet.bind(catchAll);

        $('#kill-button').click(function() {
            $.comet.unbind(updateFeed, 'feed');
        });
        </script>

    What I understand from this is that the client will keep listening on the URL ending in /broadcast/sub?channel=getIt, and when there is a message it will fire updateFeed. Pretty basic and understandable, IMO.

    Nginx HTTP push module config:

        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        push_authorized_channels_only off;

        server {
            listen 80;

            location /broadcast {
                location = /broadcast/sub {
                    set $push_channel_id $arg_channel;
                    push_subscriber;
                    push_subscriber_concurrency broadcast;
                    push_channel_group broadcast;
                }
                location = /broadcast/pub {
                    set $push_channel_id $arg_channel;
                    push_publisher;
                    push_min_message_buffer_length 5;
                    push_max_message_buffer_length 20;
                    push_message_timeout 5s;
                    push_channel_group broadcast;
                }
            }
        }

    OK, this tells nginx to listen on port 80 for any calls to /broadcast/sub, and to hand back any responses sent to /broadcast/pub. Pretty basic too; this part is not hard to understand and is well documented on the internet. Most of the time there is a Ruby or PHP file behind this that does the broadcasting. My idea is to have Node.js broadcast to /broadcast/pub. I think this will let me have persistently streaming data from the server to the client without breaking the connection. I tried the long-polling approach of looping the request, but I think this will be more efficient. Or is this not going to work?

    Node.js file: Now, how to create the Node.js part, I'm lost. First of all, I don't know how to make Node.js work this way. The setup I used for long polling was as follows:

        var sys = require('sys'),
            http = require('http');

        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write(new Date().toString());
            res.end();
        }).listen(8000);

    This listens on port 8000 and just writes the date onto the response. For long polling, my nginx.conf looked something like this:

        server {
            listen 80;
            server_name _;

            location / {
                proxy_pass http://mydomain.com:8080$request_uri;
                include /etc/nginx/proxy.conf;
            }
        }

    This just proxied port 80 through to the backend port, and that worked fine. Does anyone have an idea of how to make Node.js act in a way Comet understands? That would be really nice, and you would help me out a lot.

    Resources used:
    - An example where this is done with Ruby instead of Node.js
    - jQuery.comet
    - Nginx HTTP push module homepage
    - Faye: a Comet client and server for Node.js and Rack. To use Faye I would have to install its comet client, but I want to use the one supplied with nginx; that's why I don't just use Faye. The one nginx uses is much more optimized.

    Extra:
    - Persistent connections
    - Going evented with Node.js
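
    As for the missing piece, one approach consistent with the config above is to have Node act purely as a publisher: it POSTs each message to the push module's /broadcast/pub endpoint, and nginx holds the open client connections. A rough sketch, assuming a Node version that provides http.request (the channel name and timing are illustrative):

        var http = require('http');

        // Publish one message to the nginx push module's pub endpoint.
        function publish(channel, message) {
            var req = http.request({
                host: 'localhost',
                port: 80,
                method: 'POST',
                path: '/broadcast/pub?channel=' + channel
            }, function (res) {
                console.log('published, status ' + res.statusCode);
            });
            req.write(message);
            req.end();
        }

        // Push the current time to all subscribers of "getIt" every second.
        // Node never talks to the browsers directly; nginx fans the message
        // out over the connections it is holding open.
        setInterval(function () {
            publish('getIt', new Date().toString());
        }, 1000);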


  • Enable button based on TextBox value (WPF)

    - by zendar
    This is an MVVM application. There is a window and a related view model class. There is a TextBox, a Button and a ListBox on the form. The Button is bound to a DelegateCommand that has a CanExecute function. The idea is that the user enters some data in the text box and presses the button, and the data is appended to the list box. I would like to enable the command (and button) when the user enters correct data in the TextBox. Things work like this now: the CanExecute() method contains code that checks whether the data in the property bound to the text box is correct; the text box is bound to a property in the view model; UpdateSourceTrigger is set to PropertyChanged, and the property in the view model is updated after each key the user presses. The problem is that CanExecute() does not fire when the user enters data in the text box. It doesn't fire even when the text box loses focus. How could I make this work?

    Edit, re Yanko's comment: DelegateCommand is implemented in the MVVM toolkit template, and when you create a new MVVM project there is a DelegateCommand in the solution. As far as I saw in the Prism videos, this should be the same class (or at least very similar). Here is the XAML snippet:

        ...
        <UserControl.Resources>
            <views:CommandReference x:Key="AddObjectCommandReference"
                                    Command="{Binding AddObjectCommand}" />
        </UserControl.Resources>
        ...
        <TextBox Text="{Binding ObjectName, UpdateSourceTrigger=PropertyChanged}">
        </TextBox>
        <Button Command="{StaticResource AddObjectCommandReference}">Add</Button>
        ...

    View model:

        // Property bound to textbox
        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");
            }
        }

        // Command bound to button
        public ICommand AddObjectCommand
        {
            get
            {
                if (addObjectCommand == null)
                {
                    addObjectCommand = new DelegateCommand(AddObject, CanAddObject);
                }
                return addObjectCommand;
            }
        }

        private void AddObject()
        {
            if (ObjectName == null || ObjectName.Length == 0)
                return;

            objectNames.AddSourceFile(ObjectName);
            OnPropertyChanged("ObjectNames"); // refresh listbox
        }

        private bool CanAddObject()
        {
            return ObjectName != null && ObjectName.Length > 0;
        }

    As I wrote in the first part of the question, the following things work: the property setter for ObjectName is triggered on every keypress in the textbox, and if I put return true; in CanAddObject(), the command is active (button too). It looks to me like the binding is correct. The thing I don't know is how to make CanExecute() fire from the setter of the ObjectName property in the above code.

    Re Ben's and Abe's answers: CanExecuteChanged is an event, and the compiler complains: "The event 'System.Windows.Input.ICommand.CanExecuteChanged' can only appear on the left hand side of += or -=". The only two other members of ICommand are Execute() and CanExecute(). Do you have an example that shows how I can make the command call CanExecute()? I found a command manager helper class in DelegateCommand.cs and I'll look into it; maybe there is some mechanism there that could help. Anyway, the idea that in order to activate a command based on user input one needs to "nudge" the command object in property setter code looks clumsy. It will introduce dependencies, and one of the big points of MVVM is reducing them. Is it possible to solve this problem by using dependency properties?
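
    For reference, the two usual ways out, sketched against a Prism-style DelegateCommand (which exposes RaiseCanExecuteChanged(); the exact member name can differ between toolkit versions):

        // Option 1: nudge the command explicitly from the property setter.
        public string ObjectName
        {
            get { return objectName; }
            set
            {
                objectName = value;
                OnPropertyChanged("ObjectName");
                // Tells WPF to re-query CanAddObject() for the bound button.
                addObjectCommand.RaiseCanExecuteChanged();
            }
        }

        // Option 2: implement CanExecuteChanged so it delegates to the
        // CommandManager; WPF then re-queries CanExecute automatically on
        // focus/keyboard activity (the classic RelayCommand approach).
        public event EventHandler CanExecuteChanged
        {
            add { CommandManager.RequerySuggested += value; }
            remove { CommandManager.RequerySuggested -= value; }
        }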


  • On REST: WADL or not IDL, is the following approach right?

    - by redben
    This question is a bit long; please bear with me. In REST, I think we should not need WADL or any IDL, but rather something that implicitly covers its concept. The way I think about it: when we humans surf the web and go to a website for the first time, we don't know what services it provides; we discover them on the HTML home page (or a sitemap page in a help section), or maybe just via the main menu on the home page. If you make an analogy, the home page or sitemap is to us humans what WSDL is to WS-*, or what WADL could be to a REST service. Only it's just like any other HTML content. I think that in REST the following is a good way to do things, respecting the HATEOAS paradigm: have a top-level (or default) resource that lists links to your other resources. For a library example, say RestLibrary.com, it could be something like:

        <root xmlns:lib="http://librarystandards.com/libraryml">
            <resource class="lib:book">
                <link type="application/vnd.libraryml+xml" template="mylib.com/book/{isbn}" />
                <link type="application/vnd.libraryml+xml" rel="add" href="mylib.com/book" method="POST" />
                <link type="application/vnd.libraryml+xml" rel="update" template="mylib.com/book/{isbn}" method="PUT" />
            </resource>
            <resource class="lib:bookList">
                <link template="mylib.com/book?keywords={keywords}" type="application/vnd.openlibrary+xml" rel="search" />
            </resource>
        </root>

    Note that it is assumed that the media type "application/vnd.libraryml+xml" is a defined standard (or maybe just a proprietary vocabulary) named libraryml. Also, the client should be able to understand this "homepage" resource (the elements root, resource and link). This is the part that could be used instead of WADL: an abstract vocabulary that should be understandable by any client. You could use an existing standard like Atom, for example, but the main idea is to have an abstract vocabulary understandable by any client. Why not WADL, then? Well, WADL is only for service discovery. The idea here is to have a light, abstract vocabulary that serves as a base for hypermedia: a "root" vocabulary, like owl:Thing in OWL.

    Now, if the client knows the libraryml standard, it can follow the links to the things it understands (after parsing the media type properties and xmlns). If not, it just won't. When I can't understand how to deal with something in the REST architecture, I tend to look at how we humans do it on the web. On the web we have a generic language, HTML, that enables site builders to deliver any specific content, regardless of its meaning to the client (the user). Browsers understand HTML, but not the "meaning" of its content; it is the user who understands the (domain-specific) content. If I go to, say, QuantumPhysics.org, my browser can render the home page (it is just HTML, after all) and I can read it. If I understand quantum physics, fine, I can continue browsing; if I don't, I just leave (unless I want to learn the hard way :)). In the RestLibrary.com example, the client app is just like me plus my browser on QuantumPhysics.org. The media type "application/vnd.libraryml+xml" is the quantum physics (knowledge); HTTP is HTTP in both examples. What HTML is to QuantumPhysics.org, XML plus that tiny abstract vocabulary (root, resource and link, which you could replace with something like Atom) is to RestLibrary.com. So, does this approach have any value? Don't we need a tiny root hyper-vocabulary in order to succeed with hypermedia and the "initial URI" concept?

    Edit: Yeah, why not RDF as the root vocabulary!


  • Using ASP.NET MVC 2 with Ninject 2 from scratch

    - by Rune Jacobsen
    I just did File - New Project last night on a new project. Ah, the smell of green fields. I am using the just-released ASP.NET MVC 2 (i.e. no preview or release candidate, the real thing), and thought I'd get off to a good start using Ninject 2 (also the released version) with the MVC extensions. I downloaded the MVC extensions project, opened it in VS2008 SP1, built it in release mode, and then went into the mvc2\build\release folder and copied Ninject.dll and Ninject.Web.Mvc.dll from there to the Libraries folder of my project (so that I can lug them around in source control and always have the right version everywhere). I didn't include the corresponding .xml files; should I? Do they just provide IntelliSense, or some other function? Not a big deal, I believe. Anyhoo, I followed the most up-to-date advice I could find; I referenced the DLLs in my MVC2 project, then went to work on Global.asax.cs. First I made it inherit from NinjectHttpApplication. I removed the Application_Start() method and overrode OnApplicationStarted() instead. Here is that method:

        protected override void OnApplicationStarted()
        {
            base.OnApplicationStarted();
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            // RegisterAllControllersIn(Assembly.GetExecutingAssembly());
        }

    And I also followed VS's advice and implemented the CreateKernel method:

        protected override Ninject.IKernel CreateKernel()
        {
            // RegisterAllControllersIn(Assembly.GetExecutingAssembly());
            return new StandardKernel();
        }

    That is all; no other modifications to the project. You'll notice that the RegisterAllControllersIn() method is commented out in two places above. I figured I can run it in three different combinations, each with its own funky side effects:

    1. Running it as above. I am then presented with the standard "Welcome to ASP.NET MVC" page in all its glory. However, after this page is displayed correctly in the browser, VS shows me an exception that was thrown. It throws in NinjectControllerFactory.GetControllerInstance(), which was called with a NULL value in the controllerType parameter. Notice that this happens after the /Home page is rendered; I have no idea why it is called again, and by using breakpoints I've already determined that GetControllerInstance() had been successfully called for the HomeController. Why this new call with controllerType as null? I really have no idea. Pressing F5 at this point takes me back to the browser, no complaints there.

    2. Uncommenting the RegisterAllControllersIn() method in CreateKernel(). This is where things get really funky: now I get a 404 error. Sometimes I have also gotten an ArgumentNullException on the RegisterAllControllersIn() line, but that is pretty rare, and I have not been able to reproduce it.

    3. Uncommenting the RegisterAllControllersIn() method in OnApplicationStarted() (and putting the comment back on the one in CreateKernel()). This results in behavior that seems exactly like that in point 1.

    So, to keep from going on forever: is there an exact step-by-step guide on how to set up an MVC 2 project with Ninject 2 (both non-beta, released versions) to get the controllers provided by Ninject? Of course I will then start providing some actual stuff for injection (like ISession objects and repositories, loggers etc.), but I thought I'd get this working first. Any help will be highly appreciated! (Also posted to the Ninject Google group.)
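
    One plausible explanation for the mysterious second call in point 1, worth ruling out before anything else: after serving /Home, the browser typically requests /favicon.ico, which matches the default {controller}/{action} route but resolves to no controller type, so the controller factory is invoked with controllerType == null. A common guard, sketched as a hypothetical subclass (the exact override signature may vary between Ninject.Web.Mvc versions):

        using System;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Ninject.Web.Mvc;

        public class TolerantControllerFactory : NinjectControllerFactory
        {
            protected override IController GetControllerInstance(
                RequestContext requestContext, Type controllerType)
            {
                if (controllerType == null)
                {
                    // No controller matched (e.g. a stray /favicon.ico request):
                    // answer with a plain 404 instead of blowing up.
                    throw new HttpException(404,
                        "Not found: " + requestContext.HttpContext.Request.Path);
                }
                return base.GetControllerInstance(requestContext, controllerType);
            }
        }

    An alternative is simply to add an ignore rule such as routes.IgnoreRoute("favicon.ico"); before the default route registration.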


  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown. I've been thinking about what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style:

    - No more than 64 sprites on the screen at a time
    - No more than 8 sprites per scanline, i.e. for each line on the Y axis
    - If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action

    From what I've read, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even-numbered sprite on opposite frames from the odd-numbered ones. The scanline issue is interesting. From my testing, it is impossible to get good speed with the XBOX 360 XNA framework by drawing sprites pixel by pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For the purposes of this project, I'm making scanlines 8 pixels tall and grouping the sprites per scanline by their Y position. So, dumbed down, I need to come up with a solution that handles:

    - 64 sprites on screen at once
    - 8 sprites per 'scanline'
    - drawing sprites based on priority
    - alternating between sprites per frame
    - emulated slowdown

    Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance:

    - 0 to 63 being low
    - 64 to 127 being medium
    - 128 to 191 being high
    - 192 to 255 being maximum

    Within my data files, I can assign each sprite a certain priority level. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. Now, when a sprite gets drawn in a frame, I would randomly generate it a new priority value within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could be drawn after 3 frames of not being drawn (45+32+32+32=141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen.

    Now, the interesting question: how do I limit sprites to only 8 per scanline? I'm thinking that since I'm sorting the sprites from high priority to low priority, I can iterate through the list until I've drawn 64 sprites. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check how many sprites have already been drawn in its respective scanline via counter variables. For example:

    - Y-values between 0 and 7 belong to Scanline 0, scanlineCount[0] = 0
    - Y-values between 8 and 15 belong to Scanline 1, scanlineCount[1] = 0
    - etc.

    I could reset these values for every frame drawn. While going down the sprite list, I add 1 to the scanline's respective counter if a sprite gets drawn on that scanline. If the counter equals 8, I don't draw that sprite and go on to the sprite with the next-lowest priority (see the sketch at the end of this post).

    SLOWDOWN: The last thing I need to do is emulate slowdown. My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering for 16ms or so. However, in the NES games I've played, sometimes there's slowdown when no sprite flickering is going on, whereas the game moves beautifully even when there is some sprite flickering. Perhaps give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects with sprites surpass a threshold, introduce the slowdown?

    IN CONCLUSION... Does everything I wrote sound legitimate and workable, or is it a pipe dream? What improvements can you possibly think of for this game-programming theory of mine?
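
    A compact sketch of the per-frame selection pass described above, in XNA-flavored C#. The 8-pixel scanline height, the 64/8 limits, and the +32 aging rule come from the post; the class shape, the 240-pixel screen height and the re-roll within a 64-wide band are illustrative assumptions:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class FlickerSprite
        {
            public int Priority;       // 0-255, re-rolled within its band when drawn
            public int BasePriority;   // low end of the sprite's priority band
            public int Y;              // pixel position, used to find its scanline
        }

        public static class SpriteSelector
        {
            public static List<FlickerSprite> SelectSpritesToDraw(
                List<FlickerSprite> sprites, Random rng)
            {
                var drawn = new List<FlickerSprite>();
                var perScanline = new int[30];          // 240px screen / 8px scanlines

                // Highest priority first, as the post describes.
                foreach (var sprite in sprites.OrderByDescending(s => s.Priority))
                {
                    if (drawn.Count >= 64)              // NES total sprite limit
                        break;

                    int scanline = Math.Max(0, Math.Min(29, sprite.Y / 8));
                    if (perScanline[scanline] >= 8)     // NES per-scanline limit
                        continue;                       // skipped; it ages below

                    perScanline[scanline]++;
                    drawn.Add(sprite);

                    // Drawn this frame: re-roll priority within its original band.
                    sprite.Priority = sprite.BasePriority + rng.Next(64);
                }

                // Not drawn this frame: age by +32 so it flickers back in soon.
                foreach (var sprite in sprites)
                    if (!drawn.Contains(sprite))
                        sprite.Priority = Math.Min(255, sprite.Priority + 32);

                return drawn;
            }
        }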


  • Portion from CGPDFPage + Scale (zoom)

    - by malcom
    I want to take a rect from a CGPDFPage (the portion of the image around the user's touch point (x,y)) and scale it by a scaleFactor (e.g. 2x). Below is the code I've used to get the CGPDFPage's rect. The problem with it is the scaleFactor support. The idea is:

    1) pageRect size becomes pageRect.size * 2
    2) myThumbRect (the region to zoom) becomes resultImageSize / scaleFactor (because the final output will be scaleFactor times bigger)
    3) pointOfClick (x,y) becomes pointOfClick (2x,2y)
    4) scale up the context by the factor: CGContextScaleCTM(ctx, scaleFactor, -scaleFactor);
    5) grab the rect

    However, the result is an empty image. Any idea?

        -(UIImage *)zoomedPDFImageAtPoint:(CGPoint)pointOfClick
                                     size:(CGSize)resultImageSize
                                    scale:(CGFloat)scaleFactor
        {
            // get the rect of our page
            CGRect pageRect = CGPDFPageGetBoxRect(myPageRef, kCGPDFCropBox);

            // my thumb rect is a portion of our CGPDFPage with size 1/scaleFactor of
            // resultImageSize; then we need to scale the image portion by *scaleFactor
            // and draw it in our resultImageSize-sized graphics context
            CGSize myThumbRect = resultImageSize;

            // page rect has size as original size * scaleFactor
            //resultImageSize = pageRect.size; // to remove; I've used it to see where the rect is printed in the final image

            pointOfClick = CGPointMake(-pointOfClick.x, -pointOfClick.y);
            NSLog(@"Click (%0.f,%0.f) Page (%0.f,%0.f ; %0.f,%0.f)",
                  pointOfClick.x, pointOfClick.y,
                  pageRect.origin.x, pageRect.origin.y,
                  pageRect.size.width, pageRect.size.height);

            // create a new context for the resulting image of my desired size
            UIGraphicsBeginImageContext(resultImageSize);
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            CGContextSaveGState(ctx);

            // because the rect is for drawing in a flipped coordinate system, this
            // translates the lower-left corner of the rect in an upright coordinate system
            CGContextTranslateCTM(ctx, CGRectGetMinX(pageRect), CGRectGetMaxY(pageRect));

            // scale to flip the coordinate system so that the y axis goes up the drawing canvas
            CGContextScaleCTM(ctx, 1, -1);

            // translate so the origin is offset by exactly the rect origin
            CGContextTranslateCTM(ctx, -(pageRect.origin.x), -(pageRect.origin.y));

            // zoomedRect is the region of interest; the click point is the center of this region
            CGRect zoomedRect = CGRectMake(-pointOfClick.x,
                                           (pageRect.size.height - (-pointOfClick.y)),
                                           myThumbRect.width, myThumbRect.height);
            zoomedRect.origin.y -= (myThumbRect.height / 2.0);
            zoomedRect.origin.x -= (myThumbRect.width / 2.0);
            NSLog(@"Zoom region at (%0.f,%0.f) (%0.f,%0.f)",
                  zoomedRect.origin.x, zoomedRect.origin.y,
                  zoomedRect.size.width, zoomedRect.size.height);

            // now we need to move the clipped rect to the origin
            // x: moved by subtracting the current click x coordinate and adding half
            // of the zoomed rect (because zoomedRect contains pointOfClick at its center);
            // same with y but inverse (because the CTM is flipped)
            CGPoint translateToOrigin = CGPointMake(
                pointOfClick.x + (zoomedRect.size.width / 2.0),
                -pointOfClick.y - (zoomedRect.size.height / 2.0));
                // (pageRect.size.height - zoomedRect.size.height) + pointOfClick.y);
            NSLog(@"Translate zoomed region to origin using translate by (%0.f,%0.f)",
                  translateToOrigin.x, translateToOrigin.y);
            CGContextTranslateCTM(ctx, translateToOrigin.x, translateToOrigin.y);
            CGContextClipToRect(ctx, zoomedRect);

            // now draw the document
            CGContextDrawPDFPage(ctx, myPageRef);
            CGContextRestoreGState(ctx);

            // generate the image
            UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            return finalImage;
        }


  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie-based authentication/sessions in Google App Engine, because I don't like the idea of memcache-based sessions, and I also don't like the idea of forcing users to create Google accounts just to use a website. I stumbled across someone's posting that mentioned some signed-cookie functions from the Tornado framework, and it looks like what I need. What I have in mind is storing a user's id in a tamper-proof cookie, and maybe using a decorator for the request handlers to test the authentication status of the user; as a side benefit, the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET. This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? set_secure_cookie and get_secure_cookie are the most important parts, but it would be nice to be able to use the other methods as well.

        #!/usr/bin/env python

        import Cookie
        import base64
        import time
        import hashlib
        import hmac
        import datetime
        import re
        import calendar
        import email.utils
        import logging

        def _utf8(s):
            if isinstance(s, unicode):
                return s.encode("utf-8")
            assert isinstance(s, str)
            return s

        def _unicode(s):
            if isinstance(s, str):
                try:
                    return s.decode("utf-8")
                except UnicodeDecodeError:
                    raise HTTPError(400, "Non-utf8 argument")
            assert isinstance(s, unicode)
            return s

        def _time_independent_equals(a, b):
            if len(a) != len(b):
                return False
            result = 0
            for x, y in zip(a, b):
                result |= ord(x) ^ ord(y)
            return result == 0

        def cookies(self):
            """A dictionary of Cookie.Morsel objects."""
            if not hasattr(self, "_cookies"):
                self._cookies = Cookie.BaseCookie()
                if "Cookie" in self.request.headers:
                    try:
                        self._cookies.load(self.request.headers["Cookie"])
                    except:
                        self.clear_all_cookies()
            return self._cookies

        def _cookie_signature(self, *parts):
            self.require_setting("cookie_secret", "secure cookies")
            hash = hmac.new(self.application.settings["cookie_secret"],
                            digestmod=hashlib.sha1)
            for part in parts:
                hash.update(part)
            return hash.hexdigest()

        def get_cookie(self, name, default=None):
            """Gets the value of the cookie with the given name, else default."""
            if name in self.cookies:
                return self.cookies[name].value
            return default

        def set_cookie(self, name, value, domain=None, expires=None, path="/",
                       expires_days=None):
            """Sets the given cookie name/value with the given options."""
            name = _utf8(name)
            value = _utf8(value)
            if re.search(r"[\x00-\x20]", name + value):
                # Don't let us accidentally inject bad stuff
                raise ValueError("Invalid cookie %r:%r" % (name, value))
            if not hasattr(self, "_new_cookies"):
                self._new_cookies = []
            new_cookie = Cookie.BaseCookie()
            self._new_cookies.append(new_cookie)
            new_cookie[name] = value
            if domain:
                new_cookie[name]["domain"] = domain
            if expires_days is not None and not expires:
                expires = datetime.datetime.utcnow() + datetime.timedelta(
                    days=expires_days)
            if expires:
                timestamp = calendar.timegm(expires.utctimetuple())
                new_cookie[name]["expires"] = email.utils.formatdate(
                    timestamp, localtime=False, usegmt=True)
            if path:
                new_cookie[name]["path"] = path

        def clear_cookie(self, name, path="/", domain=None):
            """Deletes the cookie with the given name."""
            expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
            self.set_cookie(name, value="", path=path, expires=expires,
                            domain=domain)

        def clear_all_cookies(self):
            """Deletes all the cookies the user sent with this request."""
            for name in self.cookies.iterkeys():
                self.clear_cookie(name)

        def set_secure_cookie(self, name, value, expires_days=30, **kwargs):
            """Signs and timestamps a cookie so it cannot be forged"""
            timestamp = str(int(time.time()))
            value = base64.b64encode(value)
            signature = self._cookie_signature(name, value, timestamp)
            value = "|".join([value, timestamp, signature])
            self.set_cookie(name, value, expires_days=expires_days, **kwargs)

        def get_secure_cookie(self, name, include_name=True, value=None):
            """Returns the given signed cookie if it validates, or None"""
            if value is None:
                value = self.get_cookie(name)
            if not value:
                return None
            parts = value.split("|")
            if len(parts) != 3:
                return None
            if include_name:
                signature = self._cookie_signature(name, parts[0], parts[1])
            else:
                signature = self._cookie_signature(parts[0], parts[1])
            if not _time_independent_equals(parts[2], signature):
                logging.warning("Invalid cookie signature %r", value)
                return None
            timestamp = int(parts[1])
            if timestamp < time.time() - 31 * 86400:
                logging.warning("Expired cookie %r", value)
                return None
            try:
                return base64.b64decode(parts[0])
            except:
                return None

    An example cookie produced by these functions:

        uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14

    Parts:
    1. uid is the name of the key
    2. 1234 is your value in the clear
    3. 1234567890 is the timestamp
    4. d32b9e9c67274fa062e2599fd659cc14 is the signature, made from the value and the timestamp


  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking at infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such deployments; on the other hand, it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans, and JPA for persistence of entities to an RDBMS, and it fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their products, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored in or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately, I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7GB and a single 2-core "modern CPU" at around 2.5GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively, I would consider using the Large instance (64-bit, 7.5GB RAM, 2 cores at 1GHz). So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(


  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea, since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea:

    I've got four types of users modeled in a Class Table Inheritance structure; in the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above:

    Example 2: You have a "last_login" field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum.

    Now it gets even trickier. I need to keep track of some user statistics, like

    - how many unique times a user profile was seen
    - how many unique times an ad from a specific type of user was clicked
    - how many unique times a post from a specific type of user was seen

    and so on. In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

        | ID | TABLE    | EVENT     | DATE         | IP        |
        |----|----------|-----------|--------------|-----------|
        | 1  | user     | login     | 201004190030 | 127.0.0.1 |
        | 1  | user     | login     | 201004190230 | 127.0.0.1 |
        | 2  | user     | created   | 201004190031 | 127.0.0.2 |
        | 2  | user     | activated | 201004190234 | 127.0.0.2 |
        | 2  | user     | approved  | 201004190930 | 217.0.0.1 |
        | 2  | user     | login     | 201004191200 | 127.0.0.2 |
        | 15 | user_ads | created   | 201004191230 | 127.0.0.1 |
        | 15 | user_ads | impressed | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15 | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 2  | user     | blocked   | 201004200319 | 217.0.0.1 |
        | 2  | user     | deleted   | 201004200320 | 217.0.0.1 |

    Basically, the ID refers to the primary key (id) field in the table named by the TABLE column; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data.

    Due to the nature of the events table, I also thought of making some optimizations, such as:

    - #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
    - #14: Store IPs as UNSIGNED INTs with INET_ATON() instead of VARCHARs.
    - Store DATEs as TIMESTAMPs instead of DATETIMEs.
    - Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic, I guess.

    Pros:
    - Ability to store more detailed data (such as logins).
    - No need to design (and code for) almost a dozen additional tables (dates and statistics).
    - Reduces a few columns per table and keeps volatile data separated.

    Cons:
    - Non-relational (still not as bad as EAV):
      SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC;
    - 6 bytes of overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach, since the pros seem to far outweigh the cons, but I'm still a little bit reluctant. Am I missing something? What are your thoughts on this? Thanks!
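
    For reference, a sketch of the DDL these optimizations imply (MySQL; the ENUM members are guessed from the sample rows, and the TABLE/DATE columns are renamed here because TABLE is a reserved word in MySQL):

        CREATE TABLE events (
            id         INT UNSIGNED NOT NULL,  -- PK of the row in the source table
            source     ENUM('user', 'user_ads') NOT NULL,
            event      ENUM('login', 'created', 'activated', 'approved',
                            'impressed', 'clicked', 'blocked', 'deleted') NOT NULL,
            event_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            ip         INT UNSIGNED NOT NULL   -- store with INET_ATON()
        ) ENGINE = ARCHIVE;

        -- Example insert, and the query from the "cons" item, adapted:
        INSERT INTO events (id, source, event, ip)
        VALUES (2, 'user', 'login', INET_ATON('127.0.0.2'));

        SELECT *, INET_NTOA(ip) AS ip_readable
        FROM events
        WHERE id = 2 AND source = 'user'
        ORDER BY event_date DESC;

    One caveat worth checking against your MySQL version: the ARCHIVE engine only supports INSERT and SELECT and (in the versions current when this was written) essentially no indexes, so lookups like the one above will be full scans.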


  • Temporary "Backup" of SharePoint Content During Feature and Solution Deployment

    - by ccomet
    I need to decide on a method for storing a subset of the content in a SharePoint site, so that when I delete and recreate certain lists as part of a feature activation, I can re-insert all of this content back where it should belong. I have an idea myself, but I don't know if it's the only method and more importantly, the right method. My client has me creating a SharePoint system for them to communicate with their clients. The business process has maybe 5 stages in it (maybe it's more, I don't even know because they don't tell me everything), and the current system I've written over the past months is maybe 2 stages through. This meets our deadline of completing those systems by Monday next week... but at that point my client is planning on making the site live from that point. In effect, their work with their clients will be running parallel with my work for them. As I complete my own work on a separate test server, I'll push each following stage of the process onto the live server. Scheduled downtimes during non-business times (like a weekend) will be available for me to perform these pushes. Keeping pace so that my development is faster than the actual business process is my own problem and off-topic... so let's get back to the problem I stated at the start of this post. In this system, we have sets of features which will create lists for their associated content types and field types when activated, and delete these lists when the feature is deactivated. Most updates don't need to deactivate and reactivate these features, such as workflow changes, custom actions, custom forms, and similar ilk. But there are some parts which do require this. On my test server, it's okay for me to obliterate lists, but once the site is live and there's real correspondence data, it's absolutely unacceptable to do this. So when I need to implement a new change in functionality, I need to be able to store the currently present data in several lists, deactivate the feature, reactivate the feature, and restore all of this data. Perhaps I have hoist myself by my own petard with the feature system I implemented. Unfortunately, the necessity to later on make several of these "project sites" meant I had to do a lot of my code with the concept of "Can be deployed repeatedly" in mind. My current plan is to run through lists and libraries which will be affected by the particular feature that is to be reset. Files and all of their versions will be saved in a directory on the server. Then, a set of text files will be used to store all of the important field values for the items. This includes a lot of cross-list reference lookups that will need to be maintained, but that's simple enough. Then, I deactivate the feature, deploy the new solution, and reactivate the feature. We upload all of the files in the order specified by their versions and update them with the stored fields for those versions, so that we retain the version structure. As each one is first uploaded, the new ID is picked out, and all relevant lookups in the rest of the files are updated (in some manner that I make sure I don't re-update it later with an incorrect value, of course). After that, we run through all the rest of the items in the order most conducive to keeping the relational data correct. This roughly summarizes what my current plan is. 
    To my advantage, there are no long-running workflows in the system that will be affected by this, so I won't have to worry about making sure nothing is "still running" when I do this stuff. I don't really know all the cons of this approach... I can imagine they're quite hefty. But I'm unsure what other choices I even have, and my searches haven't turned up anything. Can anyone think of a better idea? Or will anyone just tell me that I really have no other choice? Thanks in advance!
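
    For what it's worth, a minimal sketch of the export half of that plan against the SharePoint object model, with hypothetical list and file names; version history and the lookup rewriting are omitted for brevity:

        using System;
        using System.IO;
        using Microsoft.SharePoint;

        class ListContentBackup
        {
            // Dump every item's writable field values to one text file per list,
            // so they can be re-inserted after the feature is reactivated.
            static void ExportList(SPWeb web, string listTitle, string backupDir)
            {
                SPList list = web.Lists[listTitle];
                using (StreamWriter writer = new StreamWriter(
                    Path.Combine(backupDir, listTitle + ".txt")))
                {
                    foreach (SPListItem item in list.Items)
                    {
                        foreach (SPField field in item.Fields)
                        {
                            if (field.ReadOnlyField)
                                continue;
                            object value = item[field.Id];
                            if (value != null)
                                writer.WriteLine("{0}\t{1}\t{2}",
                                    item.ID, field.InternalName, value);
                        }
                    }
                }
                // Files and their versions would be copied out in a similar loop
                // (item.File / item.File.Versions for document libraries).
            }
        }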


  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit(), onConfigRequest(), etc. will be necessary. (These are triggered e.g. by incoming messages on a TCP port.) My goal is to create a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions. The rest, such as the protocol handling, should simply happen in the background (using the code of the base class) and should not need to appear in the programmer's code. What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the key tags for this problem? (I have problems searching for a solution, since I lack clear terms for it.) The goal is to create some sort of library/class which, included in one's code, results in executables following the protocol.

    EDIT (new explanation): Okay, let me try to explain in more detail. In this case, programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: as soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens, it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers an onConfig(), and so on. Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer has to edit, while the overall protocol messaging stays hidden from him. Therefore, I thought using an abstract class with the abstract methods onInit() and onConfig() in it should be the right thing. The static Main method I would like to hide, since within it, e.g., there will be some part which connects to the TCP port, reacts to the Init message, and calls the onInit function. Two problems here:

    1. The static Main method can't be inherited, can it?
    2. I cannot call abstract functions from the Main method in the abstract master class.

    Let me give a pseudo-example of my ideas:

        public abstract class MasterClass
        {
            static void Main(string[] args)
            {
                // 1. open TCP connection
                // 2. wait for the Init message from the server
                // 3. onInit();
                // 4. send acknowledgement that the Init routine has ended successfully
                // 5. wait for the Config message from the server
                // 6. ...
            }

            public abstract void onInit();
            public abstract void onConfig();
        }

    I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this?

    EDIT: The strategy idea provided below is a good one! Check out my comment on that.
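
    For reference, the usual shape of this is the template method pattern: Main is not inherited, but it doesn't have to be. The base class exposes a non-static protocol driver that calls the abstract hooks, and each concrete program keeps a one-line Main. A sketch with illustrative names (the actual TCP handling is elided):

        using System;

        public abstract class ProtocolClient
        {
            // The protocol driver: identical for every program, never overridden.
            public void Run()
            {
                // 1. open the TCP connection (omitted here)
                // 2. wait for the Init message, then call the subclass hook
                OnInit();
                // 3. send the acknowledgement, wait for Config, and so on
                OnConfig();
            }

            // The only members a programmer has to implement:
            protected abstract void OnInit();
            protected abstract void OnConfig();
        }

        public class MyClient : ProtocolClient
        {
            protected override void OnInit() { Console.WriteLine("init"); }
            protected override void OnConfig() { Console.WriteLine("config"); }

            // Each concrete program keeps its own trivial entry point:
            static void Main(string[] args)
            {
                new MyClient().Run();
            }
        }

    Whether the hooks are invoked directly like this or raised as .NET events (which the post also wonders about) is largely taste; events are the better fit when several independent listeners must react to the same protocol message.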


  • Is it possible to embed Cockburn style textual UML Use Case content in the code base to improve code

    - by fooledbyprimes
    Experimenting with Cockburn use cases in code: I was writing some complicated UI code. I decided to employ Cockburn use cases with fish, kite, and sea levels (discussed by Martin Fowler in his book 'UML Distilled'). I wrapped the Cockburn use cases in static C# objects so that I could test logical conditions against static constants which represented steps in a UI workflow. The idea was that you could read the code and know what it was doing, because the wrapped objects and their public constants gave you ENGLISH use cases via namespaces. Also, I was going to use reflection to pump out error messages that included the described use cases; the idea is that the stack trace could include some UI use-case steps IN ENGLISH. It turned out to be a fun way to achieve a mini, pseudo, lightweight domain language, but without having to write a DSL compiler. So my question is whether or not this is a good way to do this. Has anyone out there ever done something similar?

    C# example snippets follow. Assume we have some .aspx page which has three user controls (with lots of clickable stuff). The user must click on stuff in one particular user control (possibly making some kind of selection), and then the UI must visually cue the user that the selection was successful. Now, while that item is selected, the user must browse through a GridView to find an item within one of the other user controls and then select something. This sounds like an easy thing to manage, but the code can get ugly. In my case, the user controls all sent event messages which were captured by the main page. This way, the page acted like a central processor of UI events and could keep track of what happens when the user is clicking around. So, in the main .aspx page, we capture the first user control's event:

        using MyCompany.MyApp.Web.UseCases;

        protected void MyFirstUserControl_SomeUIWorkflowRequestCommingIn(object sender, EventArgs e)
        {
            // some code here to respond and make "state" changes or whatever
            // blah blah blah

            // finally we have this (how did we know to call the fish-level method?
            // because we knew when we wrote the code to send the event in the user control)
            UpdateUserInterfaceOnFishLevelUseCaseGoalSuccess(
                FishLevel.SomeNamedUIWorkflow.SelectedItemForPurchase);
        }

        protected void UpdateUserInterfaceOnFishLevelGoalSuccess(FishLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case FishLevel.SomeNamedUIWorkflow.NewMasterItemSelected:
                    // call some UI-related methods here, including methods
                    // for the other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkflow.DrillDownOnDetails:
                    // call some UI-related methods here, including methods
                    // for the other user controls if necessary...
                    break;
                case FishLevel.SomeNamedUIWorkflow.CancelMultiSelect:
                    // call some UI-related methods here, including methods
                    // for the other user controls if necessary...
                    break;
                // more cases...
            }
        }

        // also we have
        protected void UpdateUserInterfaceOnSeaLevelGoalSuccess(SeaLevel.SomeNamedUIWorkflow goal)
        {
            switch (goal)
            {
                case SeaLevel.CheckOutWorkflow.ChangedCreditCard:
                    // do stuff
                    break;
                // more cases...
            }
        }
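
    For completeness, a guess at the shape of the wrapper objects these snippets assume (the post doesn't show them); nested enums inside static classes are enough to get the English-reading namespaces:

        namespace MyCompany.MyApp.Web.UseCases
        {
            // Fish/sea/kite levels as static classes, so call sites read like
            // "FishLevel.SomeNamedUIWorkflow.DrillDownOnDetails".
            public static class FishLevel
            {
                public enum SomeNamedUIWorkflow
                {
                    SelectedItemForPurchase,
                    NewMasterItemSelected,
                    DrillDownOnDetails,
                    CancelMultiSelect
                }
            }

            public static class SeaLevel
            {
                public enum CheckOutWorkflow
                {
                    ChangedCreditCard
                    // more steps...
                }
            }
        }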

    Read the article

  • How can I turn a SimpleXML object to array, then shuffle?

    - by Joshua Cody
    Crux of my problem: I've got an XML file that returns 20 results, and within these results are all the elements I need to get. Now, I need to return them in a random order, and be able to work specifically with item 1, items 2-5, and items 6-17.

    Idea 1: Use this script to convert the object to an array, which I can shuffle. This is close to working, but a few of the elements I need are under a different namespace, and I don't seem to be able to get them.

        /*
         * Convert a SimpleXML object into an array (last resort).
         *
         * @access public
         * @param object $xml
         * @param boolean $root - Should we append the root node into the array
         * @return array
         */
        function xmlToArray($xml, $root = true) {
            if (!$xml->children()) {
                return (string)$xml;
            }
            $array = array();
            foreach ($xml->children() as $element => $node) {
                $totalElement = count($xml->{$element});
                if (!isset($array[$element])) {
                    $array[$element] = "";
                }
                // Has attributes
                if ($attributes = $node->attributes()) {
                    $data = array(
                        'attributes' => array(),
                        'value' => (count($node) > 0) ? xmlToArray($node, false) : (string)$node
                        // 'value' => (string)$node (old code)
                    );
                    foreach ($attributes as $attr => $value) {
                        $data['attributes'][$attr] = (string)$value;
                    }
                    if ($totalElement > 1) {
                        $array[$element][] = $data;
                    } else {
                        $array[$element] = $data;
                    }
                // Just a value
                } else {
                    if ($totalElement > 1) {
                        $array[$element][] = xmlToArray($node, false);
                    } else {
                        $array[$element] = xmlToArray($node, false);
                    }
                }
            }
            if ($root) {
                return array($xml->getName() => $array);
            } else {
                return $array;
            }
        }

        $thumbfeed = simplexml_load_file('http://gdata.youtube.com/feeds/api/videos?q=skadaddlemedia&max-results=20&orderby=published&prettyprint=true');
        $xmlToArray = xmlToArray($thumbfeed);
        $thumbArray = $xmlToArray["feed"];
        for ($n = 0; $n < 18; $n++) {
            $title = $thumbArray["entry"][$n]["title"]["value"];
            $desc = $thumbArray["entry"][0]["content"]["value"];
            $videoUrl = $differentNamespace;
            $thumbUrl = $differentNamespace;
        }

    Idea 2: Continue using my working code, which gets the information with a foreach, but store each element in an array and then shuffle that. I'm not precisely sure how to write to an array within a foreach loop without each iteration overwriting the last, though. Working code:

        foreach ($thumbfeed->entry as $entry) {
            $thumbmedia = $entry->children('http://search.yahoo.com/mrss/')->group;
            $thumb = $thumbmedia->thumbnail[0]->attributes()->url;
            $thumburl = $thumbmedia->content[0]->attributes()->url;
            $thumburl1 = explode("http://www.youtube.com/v/", $thumburl[0]);
            $thumbid = explode("?f=videos&app=youtube_gdata", $thumburl1[1]);
            $thumbtitle = $thumbmedia->title;
            $thumbyt = $thumbmedia->children('http://gdata.youtube.com/schemas/2007')->duration;
            $thumblength = $thumbyt->attributes()->seconds;
        }

    Thoughts on whether either of these is a good solution to my problem, and if so, how I can get over my execution humps? Thanks so much for any help you can give.
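    One way to finish Idea 2, as an untested sketch that assumes the feed and namespaces load exactly as in the working code above: appending with $videos[] = ... gives every entry its own slot, so iterations never overwrite each other, and the finished array can then be shuffled and sliced into the three groups.

        $videos = array();
        foreach ($thumbfeed->entry as $entry) {
            $media = $entry->children('http://search.yahoo.com/mrss/')->group;
            $yt    = $media->children('http://gdata.youtube.com/schemas/2007');
            // [] appends a new element each pass instead of overwriting one variable
            $videos[] = array(
                'title'   => (string) $media->title,
                'thumb'   => (string) $media->thumbnail[0]->attributes()->url,
                'url'     => (string) $media->content[0]->attributes()->url,
                'seconds' => (string) $yt->duration->attributes()->seconds,
            );
        }

        shuffle($videos);                          // randomize the order
        $featured = array_slice($videos, 0, 1);    // item 1
        $row      = array_slice($videos, 1, 4);    // items 2-5
        $grid     = array_slice($videos, 5, 12);   // items 6-17

    This sidesteps the namespace problem in Idea 1 entirely, because the namespaced children are read with the same children() calls that already work.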

    Read the article

  • Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

    - by Bernard Assaf
    I wish to source a script, print the value of a variable the script defines, and then have that value assigned to a variable on the command line, with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93, and I am wondering why.

        $ cat typeset_err.ksh
        #!/bin/ksh
        unset _typeset_var
        typeset -i -r _typeset_var=1
        DIR=init # this is the variable I want to print

    When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows:

        $ A=$(. ./typeset_err.ksh; print $DIR)
        $ echo $A
        init

    When run on ksh93 (in this case, a Linux machine), the output is as follows:

        $ A=$(. ./typeset_err.ksh; print $DIR)
        -ksh: _typeset_var: is read only
        $ print $A
        ($A is undefined)

    The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values for many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know the value of $DIR in advance, but I need to copy files to $DIR during execution of a different batch script. The idea is to source the script to define its variables, print the one I want, and have that print's output assigned to another variable via the $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables.

    The typeset -r code at the beginning is the source of the error. The script I'm sourcing contains it as a semaphore of sorts, to prevent the script from being sourced more than once in the environment. (There is an if statement in the real script that checks for _typeset_var = 1 and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r. In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way, I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page.

    There are ways to code around this error. The current favorite is to set the semaphore without typeset (e.g. _typeset_var=1), but the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why it is happening.

    Another idea I abandoned was to grep the variable I need out of its containing script and then print that one variable for $A to be set to; however, the variable ($DIR in the example above) might be set to another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script. Therefore I need to source the entire script to make sure all variables are defined, so that $DIR is correctly defined when sourcing.
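    A plausible culprit, offered as a guess rather than a confirmed diagnosis: ksh93 runs command substitutions in a non-forking "virtual" subshell inside the current process, and in some releases the readonly attribute set by typeset -r can survive into later substitutions in the same session, so the next unset (and the typeset -r after it) fails. If that is what is happening, forcing a real child process sidesteps it:

        # Workaround sketch: run the source/print in a separate ksh process,
        # so any readonly attribute dies with that process instead of leaking
        # back into the invoking shell.
        A=$(ksh -c '. ./typeset_err.ksh; print "$DIR"')
        echo "$A"   # expected: init

    The cost is one extra process per lookup, which is usually acceptable in a batch script.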

    Read the article

  • Dynamically specify the type in C#

    - by Lirik
    I'm creating a custom DataSet under some constraints: I want the user to specify the type of the data they want to store; I want to reduce type-casting, because I suspect it will be VERY expensive; and I will use the data VERY frequently in my application. I don't know what type of data will be stored in the DataSet, so my initial idea was to make it a List of objects, but I suspect that the frequent use of the data and the need to type-cast will be very expensive.

    The basic idea is this:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DataSet : IDataSet
        {
            private Dictionary<string, List<object>> _data;

            /// <summary>
            /// Constructs the data set given the user-specified labels.
            /// </summary>
            /// <param name="labels">
            /// The labels of each column in the data set.
            /// </param>
            public DataSet(List<string> labels)
            {
                _data = new Dictionary<string, List<object>>();
                foreach (string label in labels)
                {
                    _data.Add(label, new List<object>());
                }
            }

            #region IDataSet Members

            public List<string> DataLabels
            {
                get { return _data.Keys.ToList(); }
            }

            public int Count
            {
                get { return _data.Values.First().Count; }
            }

            public List<object> GetValues(string label)
            {
                return _data[label];
            }

            public object GetValue(string label, int index)
            {
                return _data[label][index];
            }

            public void InsertValue(string label, object value)
            {
                _data[label].Insert(0, value);
            }

            public void AddValue(string label, object value)
            {
                _data[label].Add(value);
            }

            #endregion
        }

    A concrete example where the DataSet will be used is storing data obtained from a CSV file whose first row contains the labels. When the data is being loaded from the CSV file, I'd like to specify the type rather than casting to object. The data could contain columns such as dates, numbers, strings, etc. Here is what it could look like:

        "Date","Song","Rating","AvgRating","User"
        "02/03/2010","Code Monkey",4.6,4.1,"joe"
        "05/27/2009","Code Monkey",1.2,4.5,"jill"

    The data will be used in a machine learning/artificial intelligence algorithm, so it is essential that reading the data be very fast. I want to eliminate type-casting as much as possible, since I can't afford to cast from 'object' to whatever data type is needed on every read. I've seen applications that let the user pick the specific data type for each item in the CSV file, so I'm trying to build a similar solution where a different type can be specified for each column. I want a generic solution, so that I don't return a List<object> but a List<DateTime> (if it's a DateTime column) or a List<double> (if it's a column of doubles). Is there any way this can be achieved? Perhaps my approach is wrong; is there a better approach to this problem?
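    One common shape for this, sketched here under the assumption that the column type is known when the CSV header is read (the class and method names are illustrative, not from the question): keep one List<T> per column behind the non-generic IList interface, so each column stores its real element type and reads pay one cast per column lookup instead of one per value, with no boxing of the stored values.

        using System.Collections;
        using System.Collections.Generic;

        // Sketch only: typed columns behind a non-generic dictionary.
        class TypedDataSet
        {
            private readonly Dictionary<string, IList> _columns =
                new Dictionary<string, IList>();

            // Called once per column, when its type is chosen from the header.
            public void AddColumn<T>(string label)
            {
                _columns.Add(label, new List<T>());
            }

            public void AddValue<T>(string label, T value)
            {
                ((List<T>)_columns[label]).Add(value);
            }

            // One cast per column access; the returned list is genuinely List<T>.
            public List<T> GetValues<T>(string label)
            {
                return (List<T>)_columns[label];
            }
        }

    Usage would look like ds.AddColumn<double>("Rating") while parsing the header, then ds.GetValues<double>("Rating") in the learning code, which hands back a real List<double>. The remaining wrinkle is mapping a user-supplied type name to the right AddColumn<T> call, e.g. via a switch when the CSV is opened.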

    Read the article

  • C++, using stack.h read a string, then display it in reverse

    - by user1675108
    For my current assignment, I have to use the following header file:

        #ifndef STACK_H
        #define STACK_H

        template <class T, int n>
        class STACK
        {
        private:
            T a[n];
            int counter;
        public:
            void MakeStack() { counter = 0; }
            bool FullStack() { return (counter == n) ? true : false; }
            bool EmptyStack() { return (counter == 0) ? true : false; }
            void PushStack(T x) { a[counter] = x; counter++; }
            T PopStack() { counter--; return a[counter]; }
        };

        #endif

    to write a program that takes a sentence, stores it in the "stack", and then displays it in reverse, and I have to let the user repeat this process as often as they want. The thing is, I am NOT allowed to use arrays (otherwise I wouldn't need help with this), and I find myself stumped. To give an idea of what I am attempting, here is my code as of posting, which obviously does not work fully but is simply meant to give an idea of the assignment:

        #include <iostream>
        #include <string>
        #include <ctime>
        #include "STACK.h"
        using namespace std;

        int main(void)
        {
            auto time_t a;
            auto STACK<char, 256> s;
            auto string curStr;
            auto int i;

            // Displays the current time and date
            time(&a);
            cout << "Today is " << ctime(&a) << endl;

            s.MakeStack();
            cin >> curStr;

            i = 0;
            do {
                s.PushStack(curStr[i]);
                i++;
            } while (s.FullStack() == false);

            do {
                cout << s.PopStack();
            } while (s.EmptyStack() == false);

            return 0;
        } // end of "main"

    UPDATE: this is my code currently.

        #include <iostream>
        #include <string>
        #include <ctime>
        #include "STACK.h"
        using namespace std;

        int main(void)
        {
            time_t a;
            STACK<char, 256> s;
            string curStr;
            int i;
            int n;

            // Displays the current time and date
            time(&a);
            cout << "Today is " << ctime(&a) << endl;

            s.MakeStack();
            getline(cin, curStr);

            i = 0;
            n = curStr.size();
            do {
                s.PushStack(curStr[i++]);   // i is advanced here...
                i++;                        // ...and again here, so every other character is skipped
            } while (i < n);

            do {
                cout << s.PopStack();
            } while (!(s.EmptyStack()));

            return 0;
        }
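    A corrected sketch of the two loops, assuming the same STACK.h as above: push once per iteration (the update advances i twice per pass, so it pushes only every other character), guard against overflowing the fixed-size stack, and pop until empty. The surrounding main() is illustrative.

        #include <iostream>
        #include <string>
        #include "STACK.h"
        using namespace std;

        int main()
        {
            STACK<char, 256> s;
            string curStr;

            s.MakeStack();
            getline(cin, curStr);

            // Push each character exactly once, stopping if the stack fills up.
            for (string::size_type i = 0; i < curStr.size() && !s.FullStack(); ++i)
                s.PushStack(curStr[i]);

            // Popping yields the characters in reverse order.
            while (!s.EmptyStack())
                cout << s.PopStack();
            cout << endl;

            return 0;
        }

    Wrapping the body of main() in a do/while driven by a "go again?" prompt would cover the repeat-as-often-as-they-want requirement.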

    Read the article

  • Using label tags in a validation summary error list?

    - by patridge
    I was thinking about making use of <label> tags in my validation error summary on a failed form submit, and I can't figure out if it is going to get me in trouble down the line. Can anyone think of a good reason to avoid this approach? Usability, functionality, design, or other issues are all helpful.

    I really like the idea of clicking a line item in the error list and being jumped to the offending input element, especially in a mobile HTML scenario where vertical orientation is more common and scrolling is a pain. So far the only problem I can find is that labels don't navigate for radio buttons or checkboxes without individual IDs (clicking a label for a single ID-tagged radio/checkbox element alters its selection). It doesn't make it any worse than no label, though.

    Here is a stripped-down HTML test sample of this idea (CSS omitted for simplicity):

        <div class="validation-errors">
            <p>There was a problem saving your form.</p>
            <ul>
                <li><label for="select1">Select 1 is invalid.</label></li>
                <li><label for="text1">Text 1 is invalid.</label></li>
                <li><label for="textarea1">TextArea 1 is invalid.</label></li>
                <li><label for="radio1">Radio 1 is invalid.</label></li>
                <li><label for="checkbox1">Checkbox 1 is invalid.</label></li>
            </ul>
        </div>
        <form action="/somewhere">
            <fieldset><legend>Some Form</legend>
                <ol>
                    <li><label for="select1">select1</label>
                        <select id="select1" name="select1">
                            <option value="value1">Value 1</option>
                            <option value="value2">Value 2</option>
                            <option selected="selected" value="value3">Value 3</option>
                        </select></li>
                    <li><label for="text1">text1</label>
                        <input id="text1" name="text1" type="text" value="sometext" /></li>
                    <li><label for="textarea1">textarea1</label>
                        <textarea id="textarea1" name="textarea1" rows="5" cols="25">sometext</textarea></li>
                    <li><ul>
                        <li><label><input type="radio" name="radio1" value="value1" />Value 1</label></li>
                        <li><label><input type="radio" name="radio1" value="value2" checked="checked" />Value 2</label></li>
                        <li><label><input type="radio" name="radio1" value="value3" />Value 3</label></li>
                    </ul></li>
                    <li><ul>
                        <li><label><input type="checkbox" name="checkbox1" value="value1" checked="checked" />Value 1</label></li>
                        <li><label><input type="checkbox" name="checkbox1" value="value2" />Value 2</label></li>
                        <li><label><input type="checkbox" name="checkbox1" value="value3" checked="checked" />Value 3</label></li>
                    </ul></li>
                    <li><input type="submit" value="Save &amp; Continue" /></li>
                </ol>
            </fieldset>
        </form>

    The only thing I have added to make the click-capable behavior more obvious is a CSS rule for the labels:

        .validation-errors label {
            text-decoration: underline;
            cursor: pointer;
        }
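    If the radio/checkbox caveat matters, one possible workaround (an untested sketch): use an in-page anchor for those summary entries instead of a label, so the click navigates to the group without changing its selection. It assumes the first input of each group is given an id matching the fragment.

        <!-- In the summary: an anchor jumps to the group without toggling anything -->
        <li><a href="#radio1">Radio 1 is invalid.</a></li>

        <!-- In the form: give the first input of the group the matching id -->
        <li><label><input type="radio" id="radio1" name="radio1" value="value1" />Value 1</label></li>

    Styling the anchors like the summary labels keeps the list visually consistent while mixing the two element types.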

    Read the article

< Previous Page | 388 389 390 391 392 393 394 395 396 397 398 399  | Next Page >