Search Results

Search found 13808 results on 553 pages for 'remote storage'.


  • Is JPA + EJB too slow (or heavy) for over-Internet transactions?

    - by Xavier Callejas
    Hi, I am developing a stand-alone Java client application that connects to a Glassfish v3 application server for JPA/EJB facade-style transactions. In other words, my client application does not connect directly to the database for CRUD; it transfers JPA objects using EJB stateless sessions. I have scenarios where this client application will be used on an external network connected over a VPN across the Internet, with a 512kbps DSL client connection, and a simple query takes a very long time. Watching the traffic graph, when I merge an entity in the client application I see megabytes of traffic (I couldn't believe a purchase order entity could weigh more than 1 MB). I have LAZY fetch on almost every many-to-many relationship, but I have a lot of many-to-one relationships between entities (but this is the great advantage of JPA!). Can I do something to speed up transactions between the JPA/EJB server and the remote Java client? Thank you in advance.
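
    One thing worth checking, as a hedged sketch rather than a definitive fix (the entity and field names below are hypothetical): JPA defaults @ManyToOne and @OneToOne to EAGER fetching, so serializing a single entity for a remote EJB call can drag its whole object graph across the wire.

        import javax.persistence.*;

        @Entity
        public class PurchaseOrder {            // hypothetical entity
            @Id
            private Long id;

            // @ManyToOne defaults to FetchType.EAGER, so the entire Supplier
            // (and whatever it references) is serialized with every order.
            // Marking it LAZY keeps the transferred graph small; for remote
            // clients a slim DTO is often safer still, since lazy proxies do
            // not survive serialization to a detached client.
            @ManyToOne(fetch = FetchType.LAZY)
            private Supplier supplier;
        }

        @Entity
        class Supplier {                        // hypothetical
            @Id
            private Long id;
        }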

    Read the article

  • Develop an iPhone application remotely?

    - by ANE
    Four Java developers are new to iPod Touch/iPhone app development. They have an idea for an app, but they have never used Xcode or Macs before. Instead of spending money on a new iMac or Mac mini for each of them, my boss would like to set them up with a $999 Apple server, hosted at a facility connected to a single T1 line, and have all four people work remotely in Xcode. Is this feasible? Is anyone doing anything like this? Specifically, is one T1 enough for realistic remote app development? Would they have to work in black & white via LogMeIn or GoToMeeting to get decent speed? Can four people work remotely together on an Xcode project at the same time? Do they absolutely need their own Macs to physically connect their iPod Touches or iPhones to, or can they connect to their existing PCs with iTunes and install their in-development apps that way?

    Read the article

  • Resend confirmation instructions via Devise

    - by Paul 'Whippet' McGuane
    What I'm trying to achieve: when an admin views a list of members, they can click a link to resend the instructions on how to confirm that member's account. This is the code I'm using to try to achieve this:

        = link_to 'Resend Confirmation', confirmation_path(:user => {:email => user.email}), :remote => true

    I'm hoping this will let me pass the user's email through the link and have the instructions sent to that user, but the error I get is:

        Could not find a valid mapping for {:user=>{:email=>"[email protected]"}}

    Read the article

  • View an OS X desktop on Windows remotely at a higher resolution than the hosting machine

    - by Elijha
    I have a new 11-inch MacBook Air on which I do some web-based programming. I have a Windows box with a 1920x1200 display which I'd like to use to view the Mac desktop and keep working at home when I can, taking advantage of the higher-resolution screen and full-sized keyboard/mouse. I don't think VNC or the like is the answer I'm looking for, as it would restrict the display to the Air's 1366x768, negating the main benefit: more lines of text on screen. From some rudimentary googling, I think I'm after some sort of X11/X Window System remote display, but I'm not a Linux user and any discussion I find seems to be about Linux, OS X, or Windows-to-Linux setups. Can anyone provide a clear set of instructions on how to do this, or an application that can do it?

    Read the article

  • Ways to unit test OAuth for different services in Ruby?

    - by viatropos
    Are there any best practices for writing unit tests when, 90% of the time, building the OAuth connecting class means I need to actually be logging in to the remote service? I am building a RubyGem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for that particular provider; I would like to write tests for that. Is there a recommended way to do it? If I used mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...

    Read the article

  • gitolite on Mac doesn't add new users to authorized_keys

    - by crashbus
    I installed gitolite, and everything works fine for me as admin. But when I add a new user, the new user can't connect to the server. After I looked into the authorized_keys file, I saw that the new user wasn't added to it. During the commit of the new public key I get some warnings:

        WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        Counting objects: 6, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (4/4), done.
        Writing objects: 100% (4/4), 882 bytes, done.
        Total 4 (delta 1), reused 0 (delta 0)
        remote: WARNING: split conf not set, gl-conf present for 'gitolite-admin'
        remote: WARNING: ?? @staff christianwaldmann markwelch
        remote: sh: find: command not found
        remote: sh: find: command not found
        remote: sh: sort: command not found
        remote: sh: find: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: cut: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 23: grep: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sort: command not found
        remote: /usr/local/bin/triggers/post-compile/update-gitweb-access-list: line 26: sed: command not found
        remote: sh: find: command not found
        remote: sh: find: command not found

    How can I fix this so that gitolite automatically adds the new user to authorized_keys?

    Read the article

  • Django users and authentication from external source

    - by Boldewyn
    I have a Django app that gets its data completely from an external source (queried via HTTP). That is, I don't have the option of a local database. Session data is stored in the cache (on my development server I use a SQLite database, so that is not a source of error here). I'm using bleeding-edge Django 1.1svn. Enter the problem: I want to use Django's own authentication system for the users. It seems quite simple to write my own authentication backend, but always under the condition that you have a local database in which to save the users. Without a database, my main problem is persistence. I tried it with the following (assume that datasource.get() is a function that returns some kind of dict):

        class ModelBackend(object):
            """Login backend."""

            def authenticate(self, username=None, password=None):
                """Check if a given user/password combination is valid."""
                data = datasource.get('login', username, password)
                if data and data['ok']:
                    return MyUser(username=username)
                else:
                    raise TypeError
                return None

            def get_user(self, username):
                """Get data about a specific user."""
                try:
                    data = datasource.get('userdata', username)
                    if data and data['ok']:
                        return data.user
                except:
                    pass
                return None

        class MyUser(User):
            """Django user who isn't saved in the DB."""
            def save(self):
                return None

    But the intentionally disabled save() method on MyUser seems to break the session storage of a login. How should MyUser look without a local database?

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe REST?)
    - Centralized database server for the user DB (maybe PostgreSQL?)
    - Logging of all connections, bandwidth used, well, pretty much everything, to a centralized server (maybe PostgreSQL again?)
    - Easy server-side configuration (config file(s) stored on the servers)
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
    - Fast
    - High uptime
    - Low memory usage
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

    Read the article

  • Is it possible to read data that has been separately copied to the Android SD card without having root access?

    - by icecream
    I am developing an application that needs to access data on the SD card. When I run it on my development device (an ODROID with Android 2.1), I have root access and can construct the path using:

        File sdcard = Environment.getExternalStorageDirectory();
        String path = sdcard.getAbsolutePath() + File.separator + "mydata";
        File data = new File(path);
        File[] files = data.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File dir, String filename) {
                return filename.toLowerCase().endsWith(".xyz");
            }
        });

    However, when I install this on a phone (2.1) where I do not have root access, I get files == null. I assume this is because I do not have the right permissions to read the data from the SD card. I also get files == null when just trying to list files on /sdcard, so the same applies without my constructed path. Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the SD card, so this is a real use case. It is too much data to put in res/raw (I have tried; it did not work). I have also tried adding:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    to the manifest, even though I only want to read the SD card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.
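
    One hedged diagnostic worth trying before blaming permissions (a sketch; it may not be the asker's actual problem): listFiles() also returns null when the external storage is simply not mounted, for example while the device is attached to a PC as USB mass storage, so checking the media state first can rule that out:

        // Returns true only when external storage is mounted and readable.
        // MEDIA_MOUNTED_READ_ONLY is sufficient here since we never write.
        private boolean externalStorageReadable() {
            String state = Environment.getExternalStorageState();
            return Environment.MEDIA_MOUNTED.equals(state)
                    || Environment.MEDIA_MOUNTED_READ_ONLY.equals(state);
        }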

    Read the article

  • StorageClientException: The specified message does not exist?

    - by Aaron
    I have a simple video-encoding worker role that pulls messages from a queue, encodes a video, then uploads the video to storage. Everything seems to be working, but occasionally, when deleting the message after I am done encoding and uploading, I get "StorageClientException: The specified message does not exist." Although the video is processed, I believe the message is reappearing in the queue because it's not being deleted correctly. Is it possible that another instance of the worker role is processing and deleting the message? Doesn't GetMessage() prevent other worker roles from picking up the same message? Am I doing something wrong in the setup of my queue? What could be causing this message to not be found on delete? Some code:

        // onStart() queue setup
        var queueStorage = _storageAccount.CreateCloudQueueClient();
        _queue = queueStorage.GetQueueReference(QueueReference);
        queueStorage.RetryPolicy = RetryPolicies.Retry(5, new TimeSpan(0, 5, 0));
        _queue.CreateIfNotExist();

        public override void Run()
        {
            while (true)
            {
                try
                {
                    // The TimeSpan is the visibility timeout: the message is
                    // hidden from other consumers for 5 minutes. If processing
                    // takes longer than that, the message reappears, another
                    // instance can take it, and the original DeleteMessage
                    // fails with "The specified message does not exist."
                    var msg = _queue.GetMessage(new TimeSpan(0, 5, 0));
                    if (msg != null)
                    {
                        EncodeIt(msg);
                        PostIt(msg);
                        _queue.DeleteMessage(msg);
                    }
                    else
                    {
                        Thread.Sleep(WaitTime);
                    }
                }
                catch (StorageClientException exception)
                {
                    BlobTrace.Write(exception.ToString());
                    Thread.Sleep(WaitTime);
                }
            }
        }

    Read the article

  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos and found that Windows Azure groups blobs into partitions based on the partition key and automatically load-balances these partitions across its servers. The partition key for a blob is the blob name, so Azure partitions automatically by blob name. Now, my question is: can I make Azure partition based on the container name instead? I want my partition key to be the container name. For example, I have a storage account with two containers named container1 and container2. In container1 I have 1000 files named 1.txt, 2.txt, 3.txt, ..., 999.txt, 1000.txt, and in container2 I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ..., 1999.txt, 2000.txt. Will Windows Azure generate 2000 partitions based on the blob names and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name: container1 on one server and container2 on another?

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page:

        <script>
          var info = {
            poke: 1,
            width: 1,
            height: 1,
            path: "widget.html"
          }

          chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
            if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") {
              chrome.extension.sendRequest(
                sender.id,
                {
                  head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback",
                  body: info,
                }
              );
            }
          });

          function initSelectedTab() {
            localStorage.setItem("selectedTab", "Something");
          }

          initSelectedTab();
        </script>

    Here is manifest.json:

        {
          "update_url": "http://clients2.google.com/service/update2/crx",
          "background_page": "background.html",
          "name": "Test Widget",
          "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.",
          "icons": { "128": "icon.png" },
          "version": "0.0.1"
        }

    Here is the relevant part of widget.html:

        <script>
          var selectedTab = localStorage.getItem("selectedTab");
          document.write(selectedTab);
        </script>

    Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • How to use dd to make split ISO images from a storage device?

    - by Gustavo Bandeira
    This is a double question; I just hope that's valid. (1) I need to know how to use dd to make split ISO images from a storage device. I'm doing it over SSH: the process is slow and the risk of failing in the middle of the operation is high, so I need to know how to produce these split images from my storage device. (2) I'm searching for a good reference on dd: it could be a book or a good website, for when any doubt arises.

    1 - I'm doing it on a ~60GB storage device; it took me a whole day to copy ~10GB from this disk.
    2 - For curious people: I'm trying to recover an accidentally deleted file from an iPod. So far I've managed the whole process; I just need to improve it, because I left it copying the disk yesterday and today it gave me an error at ~10GB.
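
    A hedged sketch of one common approach (the device path, host, and chunk size are placeholders; adjust for the real disk): pipe dd into split so the image lands in fixed-size pieces, and compress in transit since the copy runs over SSH:

        # run from the local machine; /dev/sdb is the iPod's disk (hypothetical)
        ssh user@host "dd if=/dev/sdb bs=4M conv=noerror,sync | gzip -c" \
            | split -b 1G - ipod.img.gz.

        # later, reassemble and decompress:
        cat ipod.img.gz.* | gunzip -c > ipod.img

    conv=noerror,sync keeps dd going past read errors (padding bad blocks with zeros), which matters on a disk that is already failing partway through a copy.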

    Read the article

  • make a folder/partition on one computer appear as a mass storage device to another?

    - by user137560
    Is there any way to make a folder or a partition on a computer (Linux or Windows) act like a mass storage device to other computers or devices when connected with a male-to-male USB cable? For example, I have a Windows 7 computer with two partitions, C and D. I would then connect that computer to another computer or a smart TV using a male-to-male USB cable, and the other computer or device would recognize a folder/partition on the current computer as a mass storage device. Is this possible? If not, is there any USB switch that can connect an external hard drive or flash drive to both a computer and a TV without the need to switch them manually? (I know about some USB switches, but they only support automatic switching with certain types of printers, not with mass storage.)

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

    How It Works

    Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we'll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato

    Download and install Remote Potato on the Media Center PC (see download link below). If you plan to stream any recorded TV, you'll also want to install the streaming pack located on the same page. It isn't required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You'll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you'll have the option to run Remote Potato on startup and minimized in the system tray. If you're running Media Center on a dedicated HTPC, you'll probably want to enable both startup options.

    Forwarding Ports on Your Router

    You'll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we're using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: it's highly recommended that you configure the home computer running Media Center and Remote Potato with a static IP address.

    Find Your IP Address

    You'll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS

    This is an optional step, but it's highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router's external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to the new address. Remote Potato's user interface is accessed over the Internet by connecting to your router's IP address followed by a colon and the port number (Ex: XXX.XXX.XXX.XXX:9080). Instead of constantly having to look up and remember an IP address, you can use DDNS along with a third-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You'll need to register and confirm by email. Once you've signed in and selected your domain name, click Activate Services. You'll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop-down list. With DynDNS, you'll need to fill in the username and password you signed up with at the DynDNS website, and the hostname you chose. Note: you can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number (Ex: 192.168.1.2:9080).

    Logging in to Remote Potato and Recording a Show

    Once you connect, you'll see the start page. To view the TV listings, click on TV Guide; you'll then see your guide listings. There are a few ways to navigate them. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of the day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day, or jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

    Remote Potato on Mobile Devices

    Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: on any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings, then select your program to record. In the Program Details, select Record Show to record the single episode, or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

    Streaming Recorded TV

    Click on Recorded TV from the home screen to access your previously recorded TV programs, click on the selection you wish to stream, and click Play. If you receive an error message here, you'll need to install the streaming pack for Remote Potato, found on the same download page as the installation files (see link below). The "Begin from" slider allows you to start playback from the start (the default) or from a later point in the program. The "Quality (bitrate)" setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, as it provided smooth, quality video playback; we experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins, you'll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There's also a button to switch to full screen.

    Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious.
    The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

    Downloads and Links

    Download Remote Potato and Streaming Pack
    Find your IP address
    Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Blob container creation exception ...

    - by Egon
    I get an exception every time I try to create a container for the blob using the following code:

        blobStorageType = storageAccInfo.CreateCloudBlobClient();
        ContBlob = blobStorageType.GetContainerReference(containerName);
        // everything is fine till here; the next line throws.
        // Note: a 400 "One of the request inputs is out of range" from
        // CreateIfNotExist commonly indicates an invalid container name;
        // names must be 3-63 characters of lowercase letters, numbers,
        // and dashes.
        ContBlob.CreateIfNotExist();

        Microsoft.WindowsAzure.StorageClient.StorageClientException was unhandled
          Message="One of the request inputs is out of range."
          Source="Microsoft.WindowsAzure.StorageClient"
          StackTrace:
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
            at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
            at Microsoft.WindowsAzure.StorageClient.CloudBlobContainer.CreateIfNotExist(BlobRequestOptions options)
            at Microsoft.WindowsAzure.StorageClient.CloudBlobContainer.CreateIfNotExist()
            at WebRole1.BlobFun..ctor() in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\WebRole1\BlobFun.cs:line 58
            at WebRole1.BlobFun.calling1() in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\WebRole1\BlobFun.cs:line 29
            at AzureBlobTester.Program.Main(String[] args) in C:\Users\cloud\Documents\Visual Studio 2008\Projects\CloudBlob\AzureBlobTester\Program.cs:line 19
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException: System.Net.WebException
            Message="The remote server returned an error: (400) Bad Request."
            Source="System"
            StackTrace:
              at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
              at Microsoft.WindowsAzure.StorageClient.EventHelper.ProcessWebResponse(WebRequest req, IAsyncResult asyncResult, EventHandler`1 handler, Object sender)

    Do you guys know what it is that I am doing wrong?

    Read the article

  • Debugging Cactus Tests in Eclipse

    - by Th3sandm4n
    Side note: this is inherited code; I didn't do any of the setup and am new to the project. I'm trying to set up remote debugging in Eclipse for these unit tests that use Cactus. I've read around a bit, but I can't seem to find any real information on how to set this up. The closest I've found is here (http://www.eclipse.org/webtools/community/tutorials/CactusInWTP/CactusInWTP.html), but it just says to use Debug -> Debug on Server; nowhere does it say where the debug port is set, and I can't find anything on how to enable or configure it. I'm asking to see if anyone has set this up before; it would really help to step through the code rather than just logging. The runner plugin (http://jakarta.apache.org/cactus/integration/eclipse/runner_plugin.html) looks promising, but I also don't know where to download it; the page doesn't link to a location. The project uses Ant and Cactus, and I'm using Eclipse. Thanks.

    EDIT: Here is the target I'm using:

        <junit fork="no" forkmode="perTest" printsummary="yes"
               haltonfailure="no" haltonerror="no" failureproperty="tests.failed">
            <jvmarg value="-Xdebug" />
            <jvmarg value="-Xrunjdwp:transport=dt_socket,address=localhost:8005,server=y,suspend=y" />
            <formatter type="xml" usefile="true" />
            <formatter type="plain" usefile="false" />
            <classpath>
                <pathelement location="${clover.jar}"/>
                <path refid="cactus.classpath.id" />
                <pathelement location="../ejb/src" />
            </classpath>
            <sysproperty key="cactus.contextURL" value="${cactus.contextURL}"/>
            <test name="com.test.AllTests" outfile="TESTS" />
        </junit>

    Read the article

  • NIS password mapping question

    - by papoyan
    I have an NIS server with the user "techsupport", which has uid/gid = 517. I've configured NIS and NFS on that server, as well as the NFS/NIS client on the remote web server. Now I need the techsupport user to be able to log in to the web server using the techsupport username but HAVE root privileges. I need this so I can easily track which support agent is doing what on the web server. Everything works fine when I ssh from the NIS server to the web server with the techsupport user:

        nisserver# ssh [email protected]

    I can authenticate against the NIS server just fine, and my home directory that lives on the NIS server gets mounted on the web server just fine. The only two problems I have are:

    1. My GID on the web server is wrong:

        webserver# id
        uid=517(techsupport) gid=517(client_jonny) groups=517(client_jonny)

    (as you can see, it picked up the GID of a client that exists on the web server, since it's the same number).

    2. I need to make sure that my "techsupport" user has root privileges. How can I achieve this? I remember seeing identical results elsewhere, but LDAP was used; is there a way to achieve this with an NIS/NFS setup? Thank you in advance.
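
    For the root-privileges half, a hedged sketch (assuming sudo is installed on the web server and per-command logging satisfies the tracking requirement): rather than giving the NIS account uid 0, grant it full rights through sudoers, which also records each command with the invoking username via syslog:

        # /etc/sudoers on the web server (edit with visudo);
        # 'techsupport' is the NIS account described above
        techsupport ALL=(ALL) ALL

    Each sudo invocation is then logged with the calling user, which keeps the who-did-what audit trail intact.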

    Read the article

  • Testing a wide variety of computers with a small company

    - by Tom the Junglist
    Hello everyone, I work for a small dotcom that will soon be launching a reasonably complicated Windows program. We have uncovered a number of "WTF?"-type scenarios, turned up as the program has been passed around to the various non-technical types, that we've been unable to replicate. One of the biggest problems we're facing is testing: there are a total of three programmers (only one, me, working on this particular project), no testers, and a handful of assorted other staff (sales, etc.). We are also geographically isolated. The "testing lab" consists of a handful of VMware and VPC images running sort-of fresh installs of Windows XP and Vista, which run on my personal computer. The non-technical types try to be helpful when problems arise, we have trained them on how to report problems most effectively, and the software itself sports a wide array of diagnostic features; but since they aren't computer nerds like us, their reporting is only so useful, and arranging remote-control sessions to dig into the guts of their computers is time-consuming. I am looking for resources that would allow us to amplify our testing abilities without having to put together an actual lab and hire beta testers. My boss mentioned rental VPS services and asked me to look into them; however, they are still largely self-service, and I was wondering if there were any better ways. How have you, or other companies in a similar situation, handled this sort of thing? EDIT: According to the lingo, our goal here is to expand our systems-testing capacity via an elastic computing platform such as Amazon EC2. At this point I am not sure suggestions of beefing up our unit/integration testing are going to help very much, as we are consistently hitting walls at the systems-testing phase. Has anyone attempted to do this kind of software testing on a cloud-type service like EC2? Tom

    Read the article

  • Why are my Flex resource bundles not being loaded?

    - by Chris R
    I have an ActionScript module in the Flex source folder filterModules, which is one of two additional source folders in my project (the main source folder is reports, but I'm not dealing with anything in there right now). Here's the MXML content that references the resources: ... This array is assigned to the dataProvider field of a ComboBox. It's not bound using bindings, presumably for reasons that made sense to the original developer, and it would be nontrivial to change the class to make that happen. I additionally have a resource property file in a folder resources/en_US, and I have the source folder resources/{locale} in the project source settings. My additional compiler options are -locale en_US. The resource property file is resources/en_US/labels.properties (all paths are relative to the Flash Builder project root) and contains (amongst other things) these keys:

        metric.q3 = Overall Satisfaction
        metric.q5 = Personnel
        metric.q9a = Issue Resolution
        metric.q42 = Visit Duration Sat
        metric.q34 = Visit Duration

    I have written some FlexUnit tests, run in my local Flash Player, that exercise these resources; they check that every label is represented in the metrics array, for example, so I know the resource file is loaded when run locally. However, when I copy the module .swf file over to my server, the combo box to which the array is assigned is empty. I copy the .swf like so, if it matters:

        rsync -rlDv --inplace -T /tmp ~/projects/flex_reports/bin-debug/rankingFilter.swf HOSTNAME:WEBROOT/flashPath/

    Why is this? I am not able to debug the remote module because our surrounding site sets up a lot of context and makes some database calls to determine which module to load. I'm hoping to get some pointers on why resource bundles might not show up. I'd understand it if the array were present with the wrong labels, but the array is instead completely empty, which is pretty odd.

    Read the article

  • How to rdc to a particular machine that is a member of a TS farm?

    - by Amit Arora
    I created a Terminal Services farm comprising three TS hosts (say, TS1, TS2 and TS3) running Windows 2008 R2 Enterprise, a TS Connection Broker, and a TS Gateway, for the purpose of hosting a Windows application as a TS RemoteApp. The setup works just fine. Now I want to make some further configuration changes on one particular TS host, say TS2, and not on any other TS host. When I try to rdc to TS2, I find myself getting connected to a randomly chosen TS host (sometimes TS1, sometimes TS2, and at other times TS3). I think the rdc connection is also going via the Connection Broker, which is forwarding me to whichever TS host it decides is best. Is there a way I can deterministically connect to a particular TS host using rdc? I don't have the option to log in locally on a TS host, as the entire setup is hosted in a remote data center. I think this is a very common scenario and must have a straightforward solution. It could be as easy as doing rdc to the Connection Broker server and disabling it for a while, but I don't know how to do that either. Any help will be highly appreciated.
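
    One hedged possibility (these are standard mstsc switches, though whether redirection is actually skipped can depend on farm configuration): requesting the administrative session usually bypasses Connection Broker redirection:

        mstsc /v:TS2 /admin

    /admin, the successor to the old /console switch, asks for an administrative session on the named host rather than a load-balanced farm session.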

    Read the article

  • How to debug problems in Linux kernel module `init()`?

    - by Kimvais
    I am using remote (k)gdb to debug a problem in a module that causes a panic when loaded, i.e. when init() is called. The stack trace just shows that do_one_initcall(mod->init) causes the crash. In order to get the symbol file loaded in gdb, I need the address of the module's text section, and to get that I need the module loaded. Because insmod in BusyBox (1.16.1) doesn't support -m, I'm stuck with grep modulename /proc/modules plus adding the offset from nm to figure out the address. So I'm facing a sort of chicken-and-egg problem here: to be able to debug the module loading, I need to get the module loaded, but in order to get the module loaded, I need to debug the problem... So I am currently thinking about two options. Is there a way to get the address information either:

    - by printk() in the module init code, or
    - by printk() somewhere in the kernel code,

    all this prior to calling mod->init()? That way I could place a breakpoint there, load the symbol file, hit c, and see it crash and burn...
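
    A hedged sketch of one way to break the cycle (it assumes a 2.6-era struct module with the module_core field, which was renamed in later kernels, and that the caller of do_one_initcall has 'mod' in scope): break before the init call, read the load address out of the module struct from gdb itself, then load the symbols:

        # in the remote gdb session, before running insmod on the target
        (gdb) break do_one_initcall
        (gdb) continue
        # target: insmod mymodule.ko  -> breakpoint fires before init runs
        (gdb) frame 1                 # the caller has 'mod' in scope
        (gdb) p/x mod->module_core    # base address of the module's core section
        (gdb) add-symbol-file mymodule.ko <address-from-above>
        (gdb) break mymodule_init
        (gdb) continue

    The same idea works as a printk() of mod->module_core in the kernel code that calls do_one_initcall, if patching the kernel is easier than driving gdb by hand.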

    Read the article

  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel against Windows Azure Table Storage. I've created a template class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate it, however, if you could pick it apart and provide feedback on how to improve this class.

        public class AsyncDataQuery<T> where T : new()
        {
            public AsyncDataQuery(bool preserve_order)
            {
                m_preserve_order = preserve_order;
                this.Queries = new List<CloudTableQuery<T>>(1000);
            }

            public void AddQuery(IQueryable<T> query)
            {
                var data_query = (DataServiceQuery<T>)query;
                var uri = data_query.RequestUri; // required
                this.Queries.Add(new CloudTableQuery<T>(data_query));
            }

            /// <summary>
            /// Blocking but still optimized.
            /// </summary>
            public List<T> Execute()
            {
                this.BeginAsync();
                return this.EndAsync();
            }

            public void BeginAsync()
            {
                if (m_preserve_order == true)
                {
                    this.Items = new List<T>(Queries.Count);
                    for (var i = 0; i < Queries.Count; i++)
                    {
                        this.Items.Add(new T());
                    }
                }
                else
                {
                    this.Items = new List<T>(Queries.Count * 2);
                }
                m_wait = new ManualResetEvent(false);
                for (var i = 0; i < Queries.Count; i++)
                {
                    var query = Queries[i];
                    query.BeginExecuteSegmented(callback, i);
                }
            }

            public List<T> EndAsync()
            {
                m_wait.WaitOne();
                return this.Items;
            }

            private List<T> Items { get; set; }
            private List<CloudTableQuery<T>> Queries { get; set; }

            private bool m_preserve_order;
            private ManualResetEvent m_wait;
            private int m_completed = 0;

            private void callback(IAsyncResult ar)
            {
                int i = (int)ar.AsyncState;
                CloudTableQuery<T> query = Queries[i];
                var response = query.EndExecuteSegmented(ar);
                if (m_preserve_order == true)
                {
                    // preserve ordering only supports one result per query
                    this.Items[i] = response.Results.First();
                }
                else
                {
                    // add any number of items
                    this.Items.AddRange(response.Results);
                }
                if (response.HasMoreResults == true)
                {
                    // more data to pull
                    query.BeginExecuteSegmented(response.ContinuationToken, callback, i);
                    return;
                }
                // Interlocked.Increment already updates m_completed atomically;
                // assigning its return value back would be a non-atomic write
                // that races with other callbacks, so use the result directly.
                if (Interlocked.Increment(ref m_completed) == Queries.Count)
                {
                    m_wait.Set();
                }
            }
        }

    Read the article

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, plus some decision logic which adds web-service-call information to a stack. A separate thread takes the most recent call information from the stack and uses it to make the web service call; the other objects on the stack get binned. The idea is to reduce the web service calls to only those appropriate, and only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects, which then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event. If there is only one web-service-call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in that object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages in the destructor, no memory leak occurs when one object on the stack is released, but memory leaks occur if there is more than one object on the stack. I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web-service-call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone who can help, or even suggest anything I may have missed.

    Read the article
