Search Results

Search found 50600 results on 2024 pages for 'application lifecycle'.


  • ActionScript / AIR - One Button Limit (Exclusive Touch) For Mobile Devices?

    - by TheDarkIn1978
    Two years ago, when I was developing an application for the iPhone, I used the following built-in system method on all of my buttons: [button setExclusiveTouch:YES]; essentially, if you had many buttons on screen, this method ensured that the application wouldn't be permitted to do crazy things when several button events fired at the same time, since any new button press would cancel all others. Problematic: ButtonA and ButtonB are available. Each button has a mouse up event which fires a specific animated (tweened) reorganization/layout of the UI. If both buttons' events are fired at the same time, their events will likely conflict, causing a strange new layout, perhaps a runtime error. Solution: application buttons cancel any currently pending mouse up events when said button enters mouse down. private function mouseDownEventHandler(evt:MouseEvent):void { //if other buttons are currently in a mouse down state ready to fire //a mouse up event, cancel them all here. } Of course it's simple to handle this manually if there are only a few buttons on stage, but managing buttons becomes more and more complicated / bug-prone when there are several / many buttons available. Is there a convenience method available in AIR specifically for this functionality?

    Read the article

  • Issue with Running Android Program on Eclipse

    - by Hossein Mobasher
    I downloaded the complete Android Development Environment snapshot from marakana.com. I start Eclipse and create a new Android project. In Run Configurations, I created a new configuration to run the application, set the Target to Automatic and selected the AVD appropriate for running the application. But when I click on the run icon, it starts the new emulator, and after some minutes only the Android emulator is running and my application doesn't run on it. What do I do to solve the running problem and run my project on the emulator? NOTE 1: Console output: [2012-03-07 16:03:49 - New] ------------------------------ [2012-03-07 16:03:49 - New] Android Launch! [2012-03-07 16:03:49 - New] adb is running normally. [2012-03-07 16:03:49 - New] Performing com.android.example.NewActivity activity launch [2012-03-07 16:03:53 - New] Launching a new emulator with Virtual Device 'Device' [2012-03-07 16:04:00 - Emulator] emulator: WARNING: Unable to create sensors port: Unknown error NOTE 2: My program source: package com.android.example; import android.app.Activity; import android.os.Bundle; public class NewActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); } } Thanks for your attention :)

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course IVs (to prevent statistical analysis of packets). However I am concerned about a few things: each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key). Also, since the encryption key is derived from a passphrase, this makes the key space much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is, what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
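    For what it's worth, here is a minimal sketch of the scheme described above (PBKDF2-derived AES key, CBC, a fresh IV prepended to every datagram), written in Java purely for illustration since the post doesn't name a language. The class name, iteration count and packet layout are assumptions, not recommendations:

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        // Hypothetical helper: derive one AES key per session from the passphrase
        // and salt, then encrypt each small datagram independently with a fresh IV.
        public class PacketCrypto {
            private final SecretKeySpec key;
            private final SecureRandom random = new SecureRandom();

            public PacketCrypto(char[] passphrase, byte[] salt) throws Exception {
                // PBKDF2 with a high iteration count slows down offline passphrase guessing.
                SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                byte[] keyBytes = f.generateSecret(
                        new PBEKeySpec(passphrase, salt, 10000, 128)).getEncoded();
                key = new SecretKeySpec(keyBytes, "AES");
            }

            public byte[] encryptPacket(byte[] plaintext) throws Exception {
                byte[] iv = new byte[16];
                random.nextBytes(iv);                              // fresh IV per packet
                Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
                c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] ct = c.doFinal(plaintext);
                byte[] packet = new byte[iv.length + ct.length];   // [IV || ciphertext]
                System.arraycopy(iv, 0, packet, 0, iv.length);
                System.arraycopy(ct, 0, packet, iv.length, ct.length);
                return packet;                                     // this goes into the UDP datagram
            }
        }

    A receiver would split off the first 16 bytes as the IV and decrypt the rest with the same derived key; adding a MAC (or an authenticated mode) is usually the next concern for packets this small.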

    Read the article

  • What should I do with an over-bloated select-box/drop-down

    - by Tristan Havelick
    All web developers run into this problem when the amount of data in their project grows, and I have yet to see a definitive, intuitive best practice for solving it. When you start a project, you often create forms with select tags to help pick related objects for one-to-many relationships. For instance, I might have a system with Neighbors, and each Neighbor belongs to a Neighborhood. In version 1 of the application I create an edit form that has a drop-down which simply lists the 5 possible neighborhoods in my geographically limited application. In the beginning, this works great. So long as I have maybe 100 records or less, my select box will load quickly and be fairly easy to use. However, let's say my application takes off and goes national. Instead of 5 neighborhoods I have 10,000. Suddenly my little drop-down takes forever to load, and once it loads, it's hard to find your neighborhood in the massive alphabetically sorted list. Now, in this particular situation, having hierarchical data and letting users drill down using several dynamically generated drop-downs would probably work okay. However, what is the best solution when the objects/records being selected are not hierarchical in nature? In the past, I've done this with a popup with a search box and a list, but this seems clunky and dated. In today's web 2.0 world, what is a good way to find one object amongst many for one's forms? I've considered using an Ajaxified search box, but this seems to work best for free text, and falls apart a little when the data to be saved is just a reference to another object or record. Feel free to cite specific libraries with generic solutions to this problem, or simply share what you have done in your projects in a more general way.

    Read the article

  • Strategies for Synchronizing Data Between a Rails App and iPhone App

    - by jessecurry
    I've written many iPhone Applications that have pulled data from web services and I've worked on synchronizing data between an iPhone App and a Web Application, but I've always felt that there is probably a better way to handle the synchronization. I'd like to know what strategies you have used to synchronize data between your iPhone(read: mobile) Apps and your Rails(read: web) Applications. Are there any strategies that scale particularly well? How have you dealt with large amounts of data? (Do you use paged responses?) How do you make sure that data is not overwritten? Is there a reason to avoid Ruby on Rails? if so, can you suggest an alternative? What is better about the alternative? What strategies have failed? Why do you believe that those strategies failed? I would like to be able to keep all of the data modifications on the server, but the particular application I am about to start work on will need the ability to operate while disconnected from the network. The user will be able to update data on the mobile device and update data through the web application. When the user's mobile device connects to the server any local changes will be pushed to the server.
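    On the "how do you make sure that data is not overwritten" point, one widely used trick is an optimistic version (or updated_at) check on the server side. A toy sketch, deliberately not tied to Rails or the iPhone side; the Note fields are invented for illustration:

        // Toy optimistic-concurrency check: the client sends back the version it last
        // saw; the server only applies the change if nothing else has written since.
        class Note {
            long id;
            long version;   // bumped on every server-side write
            String body;
        }

        class SyncService {
            boolean applyClientUpdate(Note serverCopy, Note clientCopy) {
                if (clientCopy.version != serverCopy.version) {
                    return false;   // conflict: something else wrote first, client must re-sync
                }
                serverCopy.body = clientCopy.body;
                serverCopy.version++;
                return true;
            }
        }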

    Read the article

  • Displaying performance metrics in a modern web app?

    - by Charles
    We're updating our ancient internal PHP application at work. Right now, we gather extensive performance measurements on every pageview, and log them to the database. Additionally, users requested that some of the metrics be displayed at the bottom of the page. This worked out pretty well for us, because the last thing that the application does on every request is include the file containing the HTML footer. The updated parts of the application use an MVC framework and a Dispatch/Request/Response loop. The page footer is no longer the last thing done. In fact, it could very well be the first thing done, before the rest of the page is created. Because we can grab the Response before it's returned to the user, we could try to include placeholders for the performance metrics in the footer and simply replace them with the actual numbers, but this strikes me as a bad idea somehow. How do you handle this in your modern web app? While we're using PHP, I'm curious how it's done in a Ruby/Rails app, and in your favorite Python framework.
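    The placeholder-swap idea is less ugly than it sounds if it is done in one well-defined place at the very end of the Dispatch/Request/Response loop. A rough sketch of that pattern, written here as a Java servlet filter since the same output-buffering trick exists in PHP and most MVC stacks; TimingFilter and the %%ELAPSED_MS%% token are made-up names:

        import java.io.CharArrayWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpServletResponseWrapper;

        // Hypothetical filter: buffer the rendered page, then swap a footer placeholder
        // for the real elapsed time just before the bytes leave for the client.
        public class TimingFilter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                long start = System.nanoTime();
                final CharArrayWriter buffer = new CharArrayWriter();
                HttpServletResponseWrapper wrapper =
                        new HttpServletResponseWrapper((HttpServletResponse) res) {
                            @Override public PrintWriter getWriter() { return new PrintWriter(buffer); }
                        };
                chain.doFilter(req, wrapper);                      // the whole MVC dispatch runs here
                long elapsedMs = (System.nanoTime() - start) / 1000000L;
                String page = buffer.toString().replace("%%ELAPSED_MS%%", Long.toString(elapsedMs));
                res.getWriter().write(page);                       // footer placeholder now holds the number
            }
        }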

    Read the article

  • What's causing this permissions error and how can I work around it?

    - by Scott B
    Warning: move_uploaded_file(/home/site/public_html/wp-content/themes/mytheme/upgrader.zip) [function.move-uploaded-file]: failed to open stream: Permission denied in /home/site/public_html/wp-content/themes/mytheme/uploader.php on line 79 Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move '/tmp/phptempfile' to '/home/site/public_html/wp-content/themes/mytheme/upgrader.zip' in /home/site/public_html/wp-content/themes/mytheme/uploader.php on line 79 There was a problem. Sorry! Code is below for that line... // permission settings for newly created folders $chmod = 0755; // Ensures that the correct file was chosen $accepted_types = array('application/zip', 'application/x-zip-compressed', 'multipart/x-zip', 'application/s-compressed'); foreach($accepted_types as $mime_type) { if($mime_type == $type) { $okay = true; break; } } $okay = strtolower($name[1]) == 'zip' ? true: false; if(!$okay) { die("This upgrader requires a zip file. Please make sure your file is a valid zip file with a .zip extension"); } //mkdir($target); $saved_file_location = $target . $filename; //Next line is 79 if(move_uploaded_file($source, $saved_file_location)) { openZip($saved_file_location); } else { die("There was a problem. Sorry!"); }

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because maintaining so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database, that any modifications that may be unique to a system would be reflected from that one database, and that we should then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning to use only as many connections as necessary across the different pools (low volume apps getting fewer connections, high volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections, or more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications to the schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is no strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.
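    To make the tuning argument concrete, here is what "three small pools sized to each app's load" might look like with Commons DBCP's BasicDataSource (one plausible pool implementation under Tomcat; the driver, URLs and numbers are invented for illustration):

        import org.apache.commons.dbcp.BasicDataSource;

        // Illustrative only: three per-application pools sized to each app's load,
        // versus one shared pool of 30. Each pool is just a set of open connections
        // plus local bookkeeping in the Tomcat JVM.
        public class PoolConfig {
            static BasicDataSource poolFor(String jdbcUrl, int maxActive) {
                BasicDataSource ds = new BasicDataSource();
                ds.setDriverClassName("com.mysql.jdbc.Driver");   // assumption: MySQL
                ds.setUrl(jdbcUrl);
                ds.setUsername("app");
                ds.setPassword("secret");
                ds.setMaxActive(maxActive);                       // cap this app's connections
                ds.setMaxIdle(maxActive / 2);                     // shed idle connections beyond this
                return ds;
            }

            public static void main(String[] args) {
                BasicDataSource lowVolumeApp  = poolFor("jdbc:mysql://db/low_app", 5);
                BasicDataSource midVolumeApp  = poolFor("jdbc:mysql://db/mid_app", 10);
                BasicDataSource highVolumeApp = poolFor("jdbc:mysql://db/high_app", 15);
            }
        }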

    Read the article

  • AIR File.resolvePath won't work anymore

    - by Palleas
    Hi all, I'm having a very strange issue; it looks like my application can't create files anymore. It works with directories, but the so-many-times-used resolvePath() method doesn't. Here is what I do: var databaseFileContent : File = new File(File.desktopDirectory.nativePath + "/testing"); databaseFileContent.createDirectory(); databaseFileContent.resolvePath("test"); (Here I'm trying on the desktop, but it's the same with applicationStorageDirectory.) When I execute this, it works only for the "testing" folder, which is actually created, but my file isn't. I tried to create another application, doing this: trace(File.desktopDirectory.resolvePath("maiswtf.db").exists); trace(File.applicationStorageDirectory.resolvePath("wtf.db").exists); Both are displaying "false". Am I missing something here? I have another application with this code: var databaseFileContent : File = File.applicationStorageDirectory.resolvePath(File.separator + "sitra.db"); When I run this one, it works perfectly! My file is created at /sitra.db! Any hints? I think I'm going mad :/ Thanks!

    Read the article

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator... - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (persistentStoreCoordinator != nil) { return persistentStoreCoordinator; } NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"ApplicationName.sqlite"]]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. Typical reasons for an error here include: * The persistent store is not accessible * The schema for the persistent store is incompatible with current managed object model Check the error message to determine what the actual problem was. */ NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return persistentStoreCoordinator; } My questions are... The first time I run the app, is the ApplicationName.sqlite database created automatically if it doesn't exist? If not, when is it created? When data is added to it programmatically? Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data. Thanks so much in advance for your help! I'm going to continue researching these questions right now.

    Read the article

  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a 3rd party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account). We also set up the SPNs for the SrvWeb_iis account via the following command: setspn -A HTTP/SrvWeb.company.com SrvWeb_iis The website pulls up, but the section of the website that makes the call to the database returns the message: Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logging in, and it seems to be using NTLM and not Kerberos: Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
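    One Kerberos detail that is easy to miss, offered here as an assumption rather than something confirmed in the post: the HTTP SPN has to match the host name the browser actually uses, which in this setup is application.company.com rather than SrvWeb.company.com. When no matching SPN exists, the client falls back to NTLM, and NTLM credentials cannot be delegated on to SQL Server, which shows up as exactly this ANONYMOUS LOGON failure. Registering the extra SPN would look like setspn -A HTTP/application.company.com company\SrvWeb_iis, alongside the existing entry.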

    Read the article

  • Android Development - cannot download an image outside of onCreate

    - by murad
    Hi everyone, I'm new to Android development and I am stuck with a problem. I am trying to develop an Android application that shows the user the location of ATMs, hotels, etc. on a Google map. I haven't started working on the GPS yet; as of now the app works something like this: first of all a map loads, on which I intend to show the user's current location. On clicking the menu button there are 3 options: services, about us, quit. On selecting the services option the following options are available: atm, hospital, hotel, etc. On selecting the atm option we are shown a screen displaying some text. On using the menu for this screen we get the following menu items: sbi, canara, hdfc, icici, etc. My intention is that when the user selects the sbi option, a map should load showing the various places where there are SBI ATMs near where the user currently is. I started out with the Google Maps API, but I had to quit because when I select one of the menu options, such as "sbi", the map doesn't load; instead I am getting the error "application failed to load". Basically I was trying to load a map activity from my first map activity. After googling a bit without any results I tried another approach: I tried to download and view the static map of the location I wanted. It worked, but when I try to download the static map when an option is selected, like before, I get the same error, "application failed to load". Then I tried downloading 2 images from inside onCreate; that worked. I cannot do the same thing outside onCreate, e.g. inside the handler for the selected option. I have given the link to my code below. If someone can please look into this it would be of great help to me; I have been sitting with this problem for days now, and it's urgent too. I have done the project in Eclipse. httpDownload.java --- http://dpaste.com/195981/
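    This may or may not be the cause of the "application failed to load" error, but the standard way to download an image from a menu handler (or anywhere outside onCreate) is to push the network call onto a background task and only touch the UI when it finishes. A minimal sketch; DownloadImageTask is an invented name and the static-map URL is whatever you were already building:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.os.AsyncTask;
        import android.widget.ImageView;
        import java.io.InputStream;
        import java.net.URL;

        // Sketch: fetch the static map off the UI thread, then show it when done.
        public class DownloadImageTask extends AsyncTask<String, Void, Bitmap> {
            private final ImageView target;

            public DownloadImageTask(ImageView target) {
                this.target = target;
            }

            @Override
            protected Bitmap doInBackground(String... urls) {
                try {
                    InputStream in = new URL(urls[0]).openStream();
                    Bitmap bmp = BitmapFactory.decodeStream(in);
                    in.close();
                    return bmp;
                } catch (Exception e) {
                    return null;                 // keep the sketch simple; log in real code
                }
            }

            @Override
            protected void onPostExecute(Bitmap result) {
                if (result != null) {
                    target.setImageBitmap(result);   // safe here: runs on the UI thread
                }
            }
        }

    From the menu handler this would be kicked off with new DownloadImageTask(someImageView).execute(staticMapUrl);.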

    Read the article

  • PowerPoint PlugIn does not read defaults from .dll.config file

    - by Nick T
    I'm working on a very simple PowerPoint plugin, and I'm quite a bit stumped. In my settings.settings file, I have configured a setting "StartPath", which references where the PowerPoint plugin will navigate to using a Browser component. After I compile the application, and run the installer generated by the Setup project, the application is installed and uses the default value in the settings file. However, if I edit the application.dll.config file, the plugin still uses the old values. How can I set things up such that the plugin references the .dll.config file and not its default settings? The code to access the settings is listed below, including the other variants I have tried: //Attempt 1 string location = MyApplication.Properties.Settings.Default.StartPath; //Attempt 2 string location = ConfigurationManager.AppSettings["StartPath"]; //Attempt 3: Configuration element is inaccessible due to its protection level string applicationName = Environment.GetCommandLineArgs()[0] + ".exe"; string exePath = System.IO.Path.Combine(Environment.CurrentDirectory, applicationName); Configuration config = ConfigurationManager.OpenExeConfiguration(exePath); string location = config.AppSettings["StartPath"];

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across the network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web resource or local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task will be limited to computing, without being able for example to use files stored locally (but why not a database?), or even to use local variables, because it would rather be two distinct applications than two threads of the same application, The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • C++ MFC server app with sockets crashes and I cannot find the fault, help!

    - by usermeister
    My program has one dialog and two sockets. Both sockets are derived from CAsyncSocket; one is for listening, the other is for receiving data from the client. My program crashes when the client tries to connect to the server application and the server needs to initialize the receiving socket. This is my MFC dialog class: class CFileTransferServerDlg : public CDialog { ... ListeningSocket ListenSock; ReceivingSocket* RecvSock; void OnAccept(); // called when ListenSock gets connection attempt ... }; This is my derived socket class for receiving data, which calls the parent dialog's method when an event is signaled: class ReceivingSocket : public CAsyncSocket { CFileTransferServerDlg* m_pDlg; // for accessing parent dialogs controls virtual void OnReceive(int nErrorCode); } ReceivingSocket::ReceivingSocket() { } This is the dialog's function that handles an incoming connection attempt when the listening socket gets the event notification. This is where the crash happens: void CFileTransferServerDlg::OnAccept() { RecvSock = new ReceivingSocket; /* CRASH */ } OR void CFileTransferServerDlg::OnAccept() { ReceivingSocket* tmpSock = new ReceivingSocket; tmpSock->SetParentDlg(this); CString message; if( ListenSock.Accept(*tmpSock) ) /* CRASH */ { message.LoadStringW(IDS_CLIENT_CONNECTED); m_txtStatus.SetWindowTextW(message); RecvSock = tmpSock; } } My program crashes when I try to create a socket for receiving a file sent from the client application. OnAccept starts when the listening socket signals an incoming connection attempt, but my application then crashes. I've tried running it on another computer and the connection attempt was successful. What could be wrong? Error in debug mode: Unhandled exception at 0x009c30e1 in FileTransferServer.exe: 0xC0000005: Access violation reading location 0xccccce58.

    Read the article

  • Should I base my Embedded Linux product on Qt?

    - by Udi
    My company is developing a medical product. One of the components is a pda-like platform that will run embedded linux. We were considering Qt as the UI framework but found out that Qt is a lot more than that (we are not familiar with Qt). In general, the device needs to do the following: 1. Receive measurements over USB HID from another device (USB HID is used for convenience). 2. Process the measurements. 3. Store them in a database. 4. Interact with the user using the device's touch screen lcd. 5. Communicate (wi-fi, tcp-ip) with a central management station that collects the data and configures the device. 6. Include a web server to allow accessing the device via a browser. We intend to program in C++. My questions are: 1. Is that a good choice for such a device? 2. Assuming we choose Qt, how do we build our product? - Do we use Qt just as a GUI framework and write the application code in a separate process (passing messages between Qt and the application process)? - Do we write the entire application inside Qt, using all of the services the tool has to offer? - Another approach?

    Read the article

  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a CMS + other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution). Would I create a huge central application that contains all the databases, which communicates through a web service with the "standalone - integrated" modules? Or would I create them separately, and the only thing that the central application would do is sync the information (e.g. the CMS and another solution can have the same tables (e.g. clients or employees))? Or do you have another idea? (I know I'm a little vague, but I can't "give" a lot of details because of a work contract.) If someone has all the "packages" it should be possible for the central application to integrate all the modules in one place! Or if someone has more than 1 module, it should combine this on the website. What I thought best is that the central location contains only the users and their rights (e.g. cms - all rights, ...), and the information gets synced with every change (module cms: adding a new client - store locally and send data to the central location; central location - send to modules = clients table updated everywhere). This way it is easy if someone only "bought" a module; they can sync it easily through the complete architecture. I hope I made myself clear!

    Read the article

  • Visual Studio 2012 won't start

    - by David Aleu
    I installed VS2012 Premium from our MSDN subscription and it was working fine the first couple of days, but then I installed a few extensions and now I can't start VS2012; it gives the error: Faulting application name: devenv.exe, version: 11.0.50727.1, time stamp: 0x5011ecaa Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec49b8f Exception code: 0xc0000374 Fault offset: 0x000ce6c3 Faulting process id: 0xee8 Faulting application start time: 0x01cd89bb777fc1dd Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll I'm running it on Windows 7 64 bit. I've tried to repair, uninstall and install again - nothing. I tried to restore to a previous system restore point - nothing. The extensions I installed, as far as I can remember: VS10x Code Map, VSCommands, Visual SVN, NuGet manager (all of the above my colleagues have too and it works fine for them) and: Web Essentials, Visual Studio Color Theme Editor, SlowCheetah, Mobile Ready HTML5. Questions are: Has anyone else had this problem? Is there a way I can uninstall extensions from a command line or software? (I removed the extensions folder but that doesn't do anything.) Can I repair "C:\Windows\SysWOW64\ntdll.dll"? Is it really a problem with this dll? I haven't been able to find any similar issue in other versions, and because VS2012 is new there doesn't seem to be much information either.

    Read the article

  • Why can't I open a JBoss vfs:/ URL?

    - by skiphoppy
    We are upgrading our application from JBoss 4 to JBoss 6. A couple of pieces of our application get delivered to the client in an unusual way: jars are looked up inside of our application and sent to the client from a servlet, where the client extracts them in order to run certain support functions. In JBoss 4 we would look these jars up with the classloader and find a jar:// URL which would be used to read the jar and send its contents to the client. In JBoss 6 when we perform the lookup we get a vfs:/ URL. I understand that this is from the org.jboss.vfs package. Unfortunately when I call openStream() on this URL and read from the stream, I immediately get an EOF (read() returns -1). What gives? Why can't I read the resource this URL refers to? I've tried trying to access the underlying VFS packages to open the file through the JBoss VFS API, but most of the API appears to be private, and I couldn't find a routine to translate from a vfs:/ URL to a VFS VirtualFile object, so I couldn't get anywhere. I can try to find the file on disk within JBoss, but that approach sounds very failure prone on upgrade. Our old approach was to use Java Web Start to distribute the jars to the client and then look them up within Java Web Start's cache to extract them. But that broke on every minor upgrade of Java because the layout of the cache changed.
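    One workaround worth trying, sketched here with plain JDK calls rather than the JBoss VFS API (whether it avoids the empty-stream behaviour is an assumption): ask the classloader for the resource stream instead of opening the vfs:/ URL yourself, and copy the bytes out. The resource path is made up for illustration:

        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;

        // Sketch: copy a jar bundled inside the deployment out to a plain file by
        // streaming it through the context classloader instead of URL.openStream().
        public class JarExporter {
            public static void copyResource(String resourcePath, String destFile) throws Exception {
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                InputStream in = cl.getResourceAsStream(resourcePath);   // e.g. "client/support.jar" (hypothetical)
                if (in == null) {
                    throw new IllegalStateException("resource not found: " + resourcePath);
                }
                OutputStream out = new FileOutputStream(destFile);
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                out.close();
                in.close();
            }
        }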

    Read the article

  • Save object in CoreData

    - by John
    I am using CoreData with iPhone SDK. I am making a notes app. I have a table with note objects displayed from my model. When a button is pressed I want to save the text in the textview to the object being edited. How do I do this? I've been trying several things but none seem to work. Thanks EDIT: NSManagedObjectContext *context = [fetchedResultsController managedObjectContext]; NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity]; NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context]; [newManagedObject setValue:detailViewController.textView.text forKey:@"noteText"]; NSError *error; if (![context save:&error]) { /* Replace this implementation with code to handle the error appropriately. abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button. */ NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } The above code saves it correctly but it saves it as a new object. I want it to be saved as the one I have selected in my tableView.

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such I'm trying to keep things loosely coupled so I can add instances when I need to. The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum, and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me as I can scale up either or both parts of the application without any issue. However I'm curious if there are any best practices (or if anyone has any experiences) for when it's best to just have the web role talk directly to the data store vs. sending data by the queue? I'm thinking of the case where I have a simple insert to do from the web role - while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert. I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics" - but if anyone has any thoughts I'd be very appreciative! Thanks John

    Read the article

  • Database choices

    - by flobadob
    I have a prickly design issue regarding the choice of database technologies to use for a group of new applications. The final suite of applications would have the following database requirements... Central databases (more than one database) using MySQL (must be MySQL due to justhost.com). An application to be written which accesses the multiple MySQL databases on the web host. This application will also write to a local serverless database (SQLite/Firebird/VistaDB/whatever). Different flavors of this application will be created for Windows (.NET), Windows Mobile, Android if possible, iPhone if possible. So, the design task is to minimise the quantity of code needed to achieve this. This is going to be tricky, since the languages used are already C# / Java (Android) and Objective-C (iPhone). Not too worried about that, but can the work required to implement the various database access layers be minimised? The serverless database will hold similar data to the MySQL server, so some kind of inheritance in the DAL would be useful. I'm looking at Hibernate/NHibernate, and there is LINQ to whatever. So many choices!

    Read the article

  • Pattern for version-specific implementations of a Java class

    - by Mike Monkiewicz
    So here's my conundrum. I am programming a tool that needs to work on old versions of our application. I have the code to the application, but can not alter any of the classes. To pull information out of our database, I have a DTO of sorts that is populated by Hibernate. It consumes a data object for version 1.0 of our app, cleverly named DataObject. Below is the DTO class. public class MyDTO { private MyWrapperClass wrapper; public MyDTO(DataObject data) { wrapper = new MyWrapperClass(data); } } The DTO is instantiated through a Hibernate query as follows: select new com.foo.bar.MyDTO(t1.data) from mytable t1 Now, a little logic is needed on top of the data object, so I made a wrapper class for it. Note the DTO stores an instance of the wrapper class, not the original data object. public class MyWrapperClass { private DataObject data; public MyWrapperClass(DataObject data) { this.data = data; } public String doSomethingImportant() { ... version-specific logic ... } } This works well until I need to work on version 2.0 of our application. Now DataObject in the two versions are very similar, but not the same. This resulted in different sub classes of MyWrapperClass, which implement their own version-specific doSomethingImportant(). Still doing okay. But how does myDTO instantiate the appropriate version-specific MyWrapperClass? Hibernate is in turn instantiating MyDTO, so it's not like I can @Autowire a dependency in Spring. I would love to reuse MyDTO (and my dozens of other DTOs) for both versions of the tool, without having to duplicate the class. Don't repeat yourself, and all that. I'm sure there's a very simple pattern I'm missing that would help this. Any suggestions?
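    One minimal way out, assuming the tool knows at startup which application version it is pointed at (the post doesn't say): a small static factory that MyDTO calls instead of new MyWrapperClass(data), so the Hibernate-constructed DTO never needs anything injected. MyWrapperClassV2 stands in for the 2.0-specific subclass:

        // Sketch: a static factory keyed on the application version the tool is
        // currently pointed at. MyDTO calls WrapperFactory.wrap(data) in its
        // constructor instead of newing a specific wrapper class.
        public final class WrapperFactory {

            public enum AppVersion { V1, V2 }

            // Set once at startup, before any Hibernate query runs (assumption).
            private static volatile AppVersion version = AppVersion.V1;

            public static void setVersion(AppVersion v) {
                version = v;
            }

            public static MyWrapperClass wrap(DataObject data) {
                switch (version) {
                    case V2:
                        return new MyWrapperClassV2(data);   // hypothetical 2.0-specific subclass
                    case V1:
                    default:
                        return new MyWrapperClass(data);
                }
            }

            private WrapperFactory() {}
        }

    MyDTO's constructor then becomes wrapper = WrapperFactory.wrap(data); and stays identical for both versions of the tool.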

    Read the article

  • RPC command to initiate a software install

    - by ericmayo
    I was recently working with a product from Symantec called Norton EndPoint protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products. The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or NT Domain (WINs/DNS NetBIOS lookup). From the list, one can click and choose which workstations to deploy the end point software to, which is Symantec's virus & spyware protection suite. Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software. I really like how their product works but am not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, etc., and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished or has ideas how it could be accomplished. My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install. What's interesting is that the workstations do this without any of the logged in users knowing what's going on until the very end, where a reboot is necessary. At that point, the user gets a pop-up asking to reboot now or later, etc. My hunch is that the setup.exe program is popping this message. To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to do some action or run some program. My programming language is C/C++. Any thoughts/suggestions appreciated.

    Read the article

  • How to use interfaces in exception handling

    - by vikp
    Hi, I'm working on the exception handling layer for my application. I have read a few articles on interfaces and generics. I have used inheritance quite a lot before and I'm comfortable in that area. I have a very brief design that I'm going to implement: public interface IMyExceptionLogger { public void LogException(); // Helper methods for writing into files, db, xml } I'm slightly confused about what I should be doing next. public class FooClass: IMyExceptionLogger { // Fields // Constructors } Should I implement the LogException() method within FooClass? If yes, then I'm struggling to see how I'm better off using an interface instead of the concrete class... I have a variety of classes that will make use of that interface, but I don't want to write an implementation of that interface within each class. At the same time, if I implement the interface in one class and then use that class in different layers of the application, I will still be using concrete classes instead of interfaces, which is bad OO design... I hope this makes sense. Any feedback and suggestions are welcome. Please note that I'm not interested in using log4net or its competitors, because I'm doing this to learn. Thank you Edit: Wrote some more code. So I will implement a variety of loggers with this interface, i.e. DBExceptionLogger, CSVExceptionLogger, XMLExceptionLogger etc. Then I will still end up with concrete classes that I will have to use in different layers of my application.
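    The usual resolution is that FooClass does not implement IMyExceptionLogger at all; it holds a reference typed as the interface and is handed a concrete logger (DBExceptionLogger, CSVExceptionLogger, ...) by whoever constructs it. A sketch, written in Java for brevity although the question's code looks like C#; the Exception parameter on LogException is added for illustration:

        // Sketch: FooClass does not implement the logger interface; it *uses* one.
        // Which concrete logger it gets (DB, CSV, XML, ...) is decided by the caller,
        // so the rest of the application only ever sees the interface type.
        interface IMyExceptionLogger {
            void LogException(Exception e);   // parameter added for illustration
        }

        class DBExceptionLogger implements IMyExceptionLogger {
            public void LogException(Exception e) {
                // write e to the database (omitted)
            }
        }

        class FooClass {
            private final IMyExceptionLogger logger;

            FooClass(IMyExceptionLogger logger) {   // constructor injection
                this.logger = logger;
            }

            void doWork() {
                try {
                    // business logic here
                } catch (Exception e) {
                    logger.LogException(e);
                }
            }
        }

        public class LoggerDemo {
            public static void main(String[] args) {
                new FooClass(new DBExceptionLogger()).doWork();
            }
        }

    Swapping DBExceptionLogger for a CSV or XML logger then touches only the construction site, not the classes that report exceptions.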

    Read the article
