Search Results

Search found 3491 results on 140 pages for 'chris sears'.

  • L10N: Trusted test data for Locale Specific Sorting

    - by Chris Betti
    I'm working on an internationalized database application that supports multiple locales in a single instance. When international users sort data in the applications built on top of the database, the database theoretically sorts the data using a collation appropriate to the locale associated with the data the user is viewing. I'm trying to find sorted lists of words that meet two criteria:
      1. the sorted order follows the collation rules for the locale
      2. the words listed will allow me to exercise most / all of the specific collation rules for the locale
    I'm having trouble finding such trusted test data. Are such sort-testing datasets currently available, and if so, what / where are they? "words.en.txt" is an example text file containing American English text:
      Andrew
      Brian
      Chris
      Zachary
    I am planning on loading the list of words into my database in randomized order, and checking to see if sorting the list conforms to the original input. Because I am not fluent in any language other than English, I do not know how to create sample datasets like the following sample one in French (call it "words.fr.txt"):
      cote
      côte
      coté
      côté
    The French prefer diacritical marks to be ordered right to left. If you sorted that using code-point order, it likely comes out like this (which is an incorrect collation):
      cote
      coté
      côte
      côté
    Thank you for the help, Chris
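
    As a sketch of the round-trip check I have in mind (in C# purely for illustration; the same idea works with any ICU-backed collator, and it assumes "words.fr.txt" holds the trusted, correctly collated list, one word per line), StringComparer.Create with a CultureInfo applies that culture's collation, so the French list should come back with diacritics ordered right to left rather than by code point:

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.IO;
        using System.Linq;

        class CollationCheck
        {
            static void Main()
            {
                // The trusted list, in the order the locale's collation should produce.
                string[] expected = File.ReadAllLines("words.fr.txt");

                // Load in randomized order, then sort with the locale's collation.
                var random = new Random();
                List<string> shuffled = expected.OrderBy(_ => random.Next()).ToList();
                shuffled.Sort(StringComparer.Create(new CultureInfo("fr-FR"), false)); // false = case-sensitive

                // Sorting should reproduce the trusted input exactly.
                Console.WriteLine(shuffled.SequenceEqual(expected)
                    ? "collation OK"
                    : "collation mismatch");
            }
        }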

    Read the article

  • It's ok to throw System.Exception…

    - by Chris Skardon
    No. No it's not. It's not just me saying that, it's the Microsoft guidelines: http://msdn.microsoft.com/en-us/library/ms229007.aspx "Do not throw System.Exception or System.SystemException." Also, just as important: "Do not catch System.Exception or System.SystemException in framework code, unless you intend to re-throw." Throwing: Always, always try to pick the most specific exception type you can: if the parameter you have received in your method is null, throw an ArgumentNullException; value received greater than expected? ArgumentOutOfRangeException. For example:

        public void ArgChecker(int theInt, string theString)
        {
            if (theInt < 0)
                throw new ArgumentOutOfRangeException("theInt", theInt, "theInt needs to be greater than zero.");
            if (theString == null)
                throw new ArgumentNullException("theString");
            if (theString.Length == 0)
                throw new ArgumentException("theString needs to have content.", "theString");
        }

    Why do we want to do this? It's a lot of extra code when compared with a simple:

        public void ArgChecker(int theInt, string theString)
        {
            if (theInt < 0 || string.IsNullOrWhiteSpace(theString))
                throw new Exception("The parameters were invalid.");
        }

    It all comes down to a couple of things: the catching of the exceptions, and the information you are passing back to the calling code. Catching: Ok, so let's go with introduction-level exception handling, taught by many a university: you do all your work in a try clause, and catch anything wrong in the catch clause. So this tends to give us code like this:

        try { /* All the shizzle */ }
        catch { /* Deal with errors */ }

    But of course, we can improve on that by catching the exception so we can report on it:

        try { }
        catch (Exception ex) { /* Log that 'ex' occurred? */ }

    Now we're at the point where people tend to go "Brilliant, I've got exception handling nailed, what next???" and code gets littered with the catch(Exception ex) nastiness. Why is it nasty? Let's imagine for a moment our code is throwing an ArgumentNullException which we're catching in the catch block and logging. Ok, the log entry has been made, so we can debug the code, right? We've got all the info… What about an OutOfMemoryException – what can we do with that? That's right, not a lot; chances are you can't even log it (you are out of memory, after all), but you've caught it – and as such – have hidden it. So, as part of this, there are two things you can do. One is the rethrow method:

        try { /* code */ }
        catch (Exception ex)
        {
            // Log
            throw;
        }

    Note, it's not catch (Exception ex) { throw ex; } as that will wipe all your important stack trace information. This does let your exception continue, and is the only reason you would catch Exception (anywhere other than a global catch-all) in your code. The other preferred method is to catch the exceptions you can deal with. It may not matter that the string I'm passing in is null, and I can cope with it like this:

        try { DoSomething(myString); }
        catch (ArgumentNullException) { }

    And that's fine; it means that any exceptions I can't deal with (OutOfMemory for example) will be propagated out to other code that can deal with them. Of course, this is horribly messy – no one wants try / catch blocks everywhere – and that's why Microsoft added the 'Try' methods to the framework, and it's a strategy we should continue.
    If I try:

        int i = Int32.Parse("one");

    I will get a FormatException, which means I need the try / catch block, but I could mitigate this using the 'TryParse' method:

        int i;
        if (!Int32.TryParse("one", out i))
            return;

    Similarly, in the 'DoSomething' example, it might be beneficial to have a 'TryDoSomething' that returns a boolean value indicating success. Obviously this isn't practical in every case, so use the ol' common sense approach. Onwards "Yer, thanks Chris, I'm looking forward to writing tonnes of new code." Fear not, that is where helpers come into it… (but that's the next post)
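
    For what that hypothetical 'TryDoSomething' mentioned above might look like (a sketch only; 'DoSomething' is the example method from earlier, not a framework one), the pattern is just to catch the one narrow exception you can cope with and report success as a boolean:

        public bool TryDoSomething(string myString)
        {
            try
            {
                DoSomething(myString);
                return true;
            }
            catch (ArgumentNullException)
            {
                // The one failure we know how to cope with; anything else
                // (OutOfMemoryException and friends) still propagates.
                return false;
            }
        }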

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently), I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info), and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high-impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):

       Distances < 120 miles = $0
       121 - 240 miles = $50 (effectively 1 to 2 hours, each way)
       241 - 360 miles = $100 (effectively 2 to 3 hours, each way)
       361 - 480 miles = $200 (effectively 3 to 4 hours, each way)

    For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1-line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do.

    7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them:

       1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance.
       2) Anyone can register and use the website to line up speaking engagements with user groups; however, you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the user group leaders after the meeting has taken place.

    8. Involvement by the user group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here. Thanks, Chris G. Williams INETA Board of Directors

    Read the article

  • InSync12 and Australia Visits: UX is Global, Regional, Everywhere!

    - by ultan o'broin
    I attended the Australian Oracle User Group (AUSOUG) and Quest International User Group's InSync12 event in Melbourne, Australia: the user group conference for Oracle products in the ANZ region. I demoed Oracle Fusion Applications and then presented how Oracle crafted the world-class Fusion Apps user experience (UX). I explained the Oracle user experience design pattern strategy of uptake for all apps, not just Fusion, and what our UX pattern externalization strategy means for customers, partners, and ADF developers. A great conference, lots of energy. The InSync12 highlights for me were Oracle Senior Vice President Cliff Godwin's fast-moving Oracle E-Business Suite (EBS) roadshow with the killer Oracle Endeca user experience uptake, and Oracle ADF product outreachmeister Chris Muir's (@chriscmuir) session on the Oracle ADF Mobile solution and his hands-on mobile app development, showing how existing ADF/JDev skills can build a secure, code-once, deploy-to-many-devices hybrid app solution in minutes.

    Cliff Godwin shows off the Oracle Endeca integration with Oracle E-Business Suite.
    Chris Muir talked the talk and then walked the walk with Oracle ADF Mobile.

    Applications UX was mixing it up with the crowd at InSync12 too, showing off cool mobile UX solutions, gathering data for future innovations, and engaging with EBS, JD Edwards, and PeopleSoft apps customers and partners. User conferences such as InSync12 are an important part of our Oracle Applications UX user-centered design process, giving real apps users the opportunity to make real inputs, and a way for us to watch and listen to their needs and wants and get views on current and emerging UX too.

    Eric Stilan (@icondaddy) of Applications UX uses an iPad to gather feedback on the latest UX designs from conference attendees.

    While in Melbourne, I also visited impressive Oracle partner Callista for a major ADF and UX pow-wow, and was the, er, star of a very proactive event hosted by another partner, Park Lane Information Technology (coordinated by Bambi Price (@bambiprice) of ODTUG), where I explained what UX is about and how partners and customers can engage, participate, and deploy that Applications UX scientific insight to advantage for their entire business. I also paired up with Oracle Australia in Sydney to visit key customers while there, and back at Oracle in Melbourne I spoke with sales consultants and account managers about regional opportunities and UX strategy, and came away with an understanding of what makes the Oracle market tick in Australia. Mobile worker solution development and user experience is hot news in Australia, and this was a great opportunity to team up with Chris Muir and show how the alignment of the twin stars of UX design patterns and ADF technology enables developers to make great-looking, usable apps that really sparkle. Our UX design patterns (or functional UI patterns, to use the developer world language) mean that developers now have not only a great tool set to build apps on Oracle ADF/FMW but proven, tested usability solutions to solve common problems, which they can apply in the IDE too. In all, a whirlwind UX visit, packed with events and delivery opportunities, and all too short a time in the wonderful city of Melbourne. I need to get back there soon! For those who need a reminder, there's a website explaining how to get involved with, and participate in, Applications User Experience (including the Oracle Usability Advisory Board) events and programs.
Thank you to AUSOUG, Quest, InSync, Callista, Park Lane IT, everyone at Oracle Australia, Chris Muir, and all the other people who came together to make this a productive visit. Stay tuned for more UX developments and engagements in the region on the Oracle VoX blog and Usable Apps website too!

    Read the article

  • Time Machine (OSX) doesn't back up files in Mount Point or Disk Image File

    - by Chris
    I found this Q&A (http://superuser.com/questions/148849/backup-mounted-drive-of-an-image-in-time-machine), which prompted me to ask the following question: I have two disk images which are scripted to be mounted on login. These two disk images are always mounted to the same location, and they are encrypted TrueCrypt volumes. Time Machine (TM) will only back up the disk images the first time they are mounted, but not after that. As I modify documents within the volumes throughout the day, the modified timestamps are adjusted properly; however, TM does not back them up. TM never backs up the mount points, which are two folders within my home directory. Any ideas as to why neither the mount points nor the image files are backed up? Do the image files have to be closed (unmounted) after being modified for TM to back them up? Thanks, Chris

    Read the article

  • How can I find the Windows domain logon name of a user from within Outlook 2010?

    - by Chris Farmer
    I need to figure out someone's login name for our domain, and I'd like to be able to do this from within Outlook 2010. I used to be able to do this from Outlook 2007 by right-clicking the user's name in an email message that they'd sent me, and clicking "Outlook Properties..." in the context menu. That would bring up a dialog containing what I need in the "alias" field. Now I've installed Outlook 2010. I want to do the same thing, but I can't seem to find a corresponding field. First, I don't see an explicit "Outlook Properties" menu option anymore, and what I think is the corresponding dialog looks completely different. It seems weird that, although I'm looking at the properties of my own name in the same email message in 2007 and 2010, my name is shown differently in each -- Chris versus Christopher. That makes me think that Outlook isn't really looking in the same place to get this info in each case. So, can I get that "alias" field from within Outlook 2010?

    Read the article

  • UDP flooding multiple servers

    - by Chris Gurney
    What do you suggest? We are being UDP flooded, as I write, on multiple servers in different data centers in five different countries, at up to 250,000 packets a second. I believe Cisco ASA 5505s would not handle that (some of our data center hosts can offer them; some have no firewalls to offer at all). Our clients naturally suffer constant disconnects from the server they are on. The hacker started this about three weeks ago, sometimes for a few hours, sometimes for up to a few days. If we can't stop it hitting the servers with firewalls, then how do we stop the hacker - now there is the challenge! Update: I found that some of the data centers offer up to 10 firewall rules, but would their routers be able to handle the volume I am talking about? Thanks Chris
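
    For what it's worth, on the Linux hosts I can at least rate-limit UDP in iptables as a stop-gap (a sketch; the port number is a placeholder for our actual service port):

        # Allow a bounded rate of UDP to the service port, drop the excess.
        iptables -A INPUT -p udp --dport 27015 -m limit --limit 1000/s --limit-burst 2000 -j ACCEPT
        iptables -A INPUT -p udp --dport 27015 -j DROP

    Though as I understand it, the packets have already crossed the uplink by the time iptables sees them, so at this volume only upstream filtering (the data center's own routers or a scrubbing service) really solves it; hence the question.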

    Read the article

  • HP DL160 vs DL360 for virtualization

    - by Chris
    Hi, we're planning to consolidate our infrastructure (a dozen servers). We'll buy two or three identical new servers, which will be set up as a XenServer pool. Load balancing and HA tools will monitor the pool and live-migrate VMs in case of failure/overload. I know that the DL100 series of HP servers is cheaper than the DL300 series (in every sense of the word). As we don't need local storage (we have a SAN) and can live with a temporarily down server (provided that XenServer HA tools work as advertised), what are the downsides of going with the DL100 series? Thanks, Chris

    Read the article

  • Why aren't connections released by the tomcat AJP connector

    - by Chris
    I have a JBoss server here with a web application. Tomcat is configured to use the AJP connector. Incoming connections are tunneled via an Apache reverse proxy to the connector. Now I've noticed that under heavy load the connector keeps a bunch of connections in "keep alive" mode for eternity and doesn't release them any more. With the normal HTTP connector the app did well, but now with the AJP connector we have regular app stalls. Can someone give me some advice on where to start looking to resolve this issue? Why does the connector not release the connection again after idling for 300 secs? thanks, chris
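
    For reference, the bit of config I'm staring at is the AJP connector in server.xml; a sketch of what I believe a bounded version should look like (the values are illustrative, not recommendations):

        <!-- server.xml: cap how long an idle AJP connection may be held open -->
        <Connector port="8009" protocol="AJP/1.3"
                   connectionTimeout="60000"
                   maxThreads="200" />

    If I read the Tomcat docs right, the AJP connector's connectionTimeout defaults to infinite (unlike the HTTP connector's), which would match the symptom.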

    Read the article

  • Enabling 8021q on a nic

    - by Chris Phillips
    Hi, I'm trying to get a VLAN interface on a bonded nic (CentOS 5.5), and whilst the interface has been very happily created with vconfig, I'm seeing no traffic on it at all. Running tcpdump and tshark on the underlying eth0, I see no sign at all of VLAN tags in the traffic, and I'm wondering if there's something I'm missing on the server side, as the network dept say they are sending me the tagged data. I've got the 8021q module loaded; however, under lsmod it shows it's only being used by the cxgb3 module (for an unused onboard iSCSI card), whereas my nics (on an HP DL380 G7) are driven by the bnx2 and e1000e modules. Should those modules list 8021q as a module they use? Should I have something concrete in /etc/modprobe.conf? Thanks Chris
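
    For completeness, here's the CentOS-style config I'd expect to need instead of a manual vconfig; a sketch for VLAN 100 on bond0 (the VLAN ID and addressing are made up):

        # /etc/sysconfig/network-scripts/ifcfg-bond0.100
        DEVICE=bond0.100
        VLAN=yes
        BOOTPROTO=static
        IPADDR=192.168.100.10
        NETMASK=255.255.255.0
        ONBOOT=yes

    One thing I've since read is that some nics strip VLAN tags in hardware (VLAN offload), so tcpdump on the raw eth0 can show untagged frames even when tagged traffic is arriving; that may explain what I'm seeing.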

    Read the article

  • How backward compatible are the HSDPA mobile networks

    - by Chris Kimpton
    Hi, I have got this Huawei wifi device, which has been unlocked for other networks. It works fine in the UK on Vodafone (as well as 3). We are trying to get it to work with the Claro network in Jamaica. It connects and stays connected, but fails to get a 3G connection, just the slow EDGE one. Claro support say it's because Claro currently does not support the 2100MHz frequency for 3G, which is what the device uses. Does that sound correct? They say I need a device where you "Ensure however that these devices can use the 850MHz frequency." My understanding was that the device supports everything up to 2100MHz, including their 850MHz... I am thinking that maybe the APN is incorrect, but I have set it to the only value I can find on the net, namely: internet.ideasclaro.com.jm Thanks in advance, Chris

    Read the article

  • FTP an entire folder via the command line

    - by Chris Southam
    I'm looking to migrate some websites to a new server. I have SSH access to the current one but only FTP access to the new one. How can I, via CentOS and SSH, copy entire folders to the new server via FTP? I can log into the new server via the built-in FTP client on CentOS, but I'm not sure of the commands after this. I need to move the httpdocs folder from the old server to the public_html folder of the new server. I'd love to do this server to server, as it'll be a lot quicker than download-upload via my broadband connection. Yours, Chris
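
    I gather lftp's reverse mirror can push a whole tree over FTP in one command, which may be exactly this; a sketch with placeholder paths and credentials:

        # yum install lftp
        lftp -u ftpuser,ftppass -e "mirror -R /var/www/vhosts/example.com/httpdocs /public_html; bye" ftp.newserver.example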

    Read the article

  • Ubuntu 9.04 Presario S4000NX Fan Speed

    - by Chris C
    I recently installed Ubuntu 9.04 on a Presario S4000NX, and the CPU fan is kept at max speed. With Windows XP installed, the fan speed would increase/decrease as required. I installed lm-sensors and ran sensors-detect. It recommended that I load these modules, which I did:
      smsc47m192
      i2c-i801
    When running sensors-detect, it gave me this strange message:
      Trying family SMSC
      Found SMSC LPC47M15x/192/997 Super IO Fan Sensors (but not activated)
    Running the sensors command gives me a list of voltages and the CPU temperature, but doesn't list any fans. After doing some Internet research I then tried to load the smsc47m1 module, but I get the following error:
      FATAL: Error inserting smsc47m1 (/lib/modules/2.6.28-15-generic/kernel/drivers/hwmon/smsc47m1.ko): No such device
    The file smsc47m1.ko does exist in the listed folder. Any suggestions for getting the fan speed (and the noise) down in Ubuntu? Thanx. - Chris P.S. - I would have put better tags but Server Fault wouldn't let me.
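
    In case it helps, my understanding is that once sensors-detect actually activates a fan-capable chip, the speed control side is handled by the lm-sensors companion tools (a sketch; package layout varies by release):

        sudo apt-get install fancontrol   # pwmconfig/fancontrol utilities
        sudo pwmconfig                    # interactively maps PWM outputs to fans, writes /etc/fancontrol
        sudo /etc/init.d/fancontrol start # applies the thresholds from /etc/fancontrol

    My problem is that the smsc47m1 driver refuses to load at all, so pwmconfig has nothing to talk to.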

    Read the article

  • Client Outlook account deleted on client system, need to recover e-mails that are on server back to client

    - by chris
    Hello, I have 2 client computers running XP and MS Office with Outlook 2007. I have a 2003 server running Exchange. A client's e-mail account was removed on the client's computer, but the e-mails are still on the server, as I can see the mailbox with approx 14k of mail. How do I restore the e-mail account on the client and retrieve the e-mail from the server? I did not set up Outlook or Exchange, so I do not know the e-mail settings. Any help? Chris

    Read the article

  • Installing Apache2 and PHP5 on an Xserve G5 running 10.4

    - by Chris
    Hello, I have an Apple Xserve G5 running OS X Server 10.4. I want to update my Apache installation from 1.3 to 2 and PHP to 5. I also want to install PHP GD support. I have scoured the internet for a guide on how to do this, but to no avail. I also tried to use Entropy to install PHP 5 several times, and it always manages to royally mess up my system. Obviously I can't install Snow Leopard, because it's a PowerPC processor. Can anyone give me any tips on how to get this software updated, or point me to a guide? Thanks, Chris
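
    From what I've read, the usual route here is building from source; a sketch of the PHP side of it (the paths are guesses for wherever your Apache 2 ends up, and GD wants libjpeg/libpng built first):

        ./configure \
          --with-apxs2=/usr/local/apache2/bin/apxs \
          --with-gd \
          --with-jpeg-dir=/usr/local \
          --with-png-dir=/usr/local \
          --with-zlib
        make
        sudo make install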

    Read the article

  • Refresh devices - reconnect CF card drive by script (unplug-plug equivalent)

    - by Chris
    Hello, I plug a completely clean CF card into my USB card-writer. Then I dd an MBR block of 512 bytes to the device, which contains the partition table and the definition of one partition. Problem: while "fdisk -l /dev/sdx" correctly displays the partition, there is often no device like "/dev/sdx1" after these operations (as it was not present before). Unplugging and re-plugging the card-writer solves the problem and makes the device(s) appear. Since I use this procedure in a script, manually unplugging and re-plugging is not an option. Is there a way to "refresh" the devices, or to "unplug and re-plug" the drive by script, such that /dev/sdx1 appears? Thanks for any help, Chris
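
    For what it's worth, the closest thing I've found to a scripted "replug" is asking the kernel to re-read the partition table (either of these, run against the whole device, not the partition):

        # Ask the kernel to re-scan the partition table:
        blockdev --rereadpt /dev/sdx
        # or, from the parted package:
        partprobe /dev/sdx

    I gather udev then creates /dev/sdx1 a moment later, so a short sleep before using the node is probably prudent in a script.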

    Read the article

  • Migration of egroupware from one Ubuntu server to another

    - by Chris Schmidt
    I am new to server administration and have been attempting to migrate from one Ubuntu server to another for 4 days now. I am having a problem with the migration of eGroupware settings. Specifically, I need to find where the knowledgebase saves its files on the local machine. I have the database set correctly and it's using the previous settings; however, articles in the knowledgebase are not showing the images associated with them. I have tried everything to find the folder on the old server that stores data files uploaded to the knowledgebase, but I cannot. If there is another way to import these files, or if someone knows where they are saved, I would really appreciate the help. -Chris

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:
      linux-box# fsck.ext3 -c /dev/sdx1
      e2fsck 1.40.2 (12-Jul-2007)
      Checking for bad blocks (read-only test): done
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
      Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
    Thanks, Chris
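
    In case it matters, I can separate the two steps so that the check itself stays read-only (a sketch):

        badblocks -sv -o bb.list /dev/sdx1   # read-only bad block scan, list to file
        e2fsck -fn /dev/sdx1                 # -n opens the filesystem read-only

    From what I've read, -c rewrites the bad-blocks inode even when the scan finds nothing, and e2fsck counts that as a modification, which would explain the message and the non-zero exit code; but I'd like confirmation.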

    Read the article

  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I'm on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I'm going to begin from the beginning; this is my first foray into running something on Azure, so it's been a bit of a learning curve. But luckily the Neo4j guys have got us started, so let's download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip OK, the other thing we'll need is the VS2012 Azure SDK, so let's get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let's open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln One-way upgrade? Yer! Ignore the migration report – we don't care! Let's build that sucker… Ahhh, 14 errors: "WindowsAzure does not exist in the namespace 'Microsoft'". Not a problem, right? We've installed the SDK, we just need to update the references. We can ignore the Test projects (they don't use Azure); we're interested in the other projects, so what we'll do is remove the broken references and add the correct ones back in. Expand the References bit of each project, hunt out those yellow exclamation marks, and delete them! You'll need to add the right ones back in (listed below); when you go to the 'Add Reference' dialog, make sure you have 'Assemblies' and 'Framework' selected before you search (and search for 'microsoft.win' to narrow it down). The references you need for each project are:

        CollectDiagnosticsData:
          Microsoft.WindowsAzure.Diagnostics
          Microsoft.WindowsAzure.StorageClient

        Diversify.WindowsAzure.ServiceRuntime:
          Microsoft.WindowsAzure.CloudDrive
          Microsoft.WindowsAzure.ServiceRuntime
          Microsoft.WindowsAzure.StorageClient

    Right, so let's build again… Sweet! No errors.

    Now we need to set up our Blobs. I'm assuming you are using the most up-to-date Java you happen to have downloaded :) in my case that's JRE7, and that is located in: C:\Program Files (x86)\Java\jre7 So, zip up that folder into whatever you want to call it (I went with jre7.zip) and stick it in a temp folder for now. In that same temp folder I also copied the Neo4j zip I was using: neo4j-community-1.7.2-windows.zip

    OK, now we need to get these into our Blob storage. This is where a lot of stuff becomes unstuck: I didn't find any applications that helped me use the blob storage; one would crash (because my internet speed is so slow) and the other just didn't work. Sure, it looked like it had worked, but when push came to shove it didn't. So this is how I got my files into Blob (local first):

    1. Run the 'Storage Emulator' (just search for that in the Start menu).
    2. That takes a little while to start up, so fire up another instance of Visual Studio in the meantime, and create a new Console Application.
    3. Manage NuGet Packages for that solution and add 'Windows Azure Storage'. Now you're set up to add the code:

        using System;
        using System.IO;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static void Main()
        {
            CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
            CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
            client.Timeout = TimeSpan.FromMinutes(30);
            CloudBlobContainer container = client.GetContainerReference("neo4j");
            container.CreateIfNotExist(); // create the container if it isn't there yet

            UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
            UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");
        }

        private static void UploadBlob(CloudBlobContainer container, string blobName, string filename)
        {
            CloudBlob blob = container.GetBlobReference(blobName);

            using (FileStream fileStream = File.OpenRead(filename))
                blob.UploadFromStream(fileStream);
        }

    This will upload the files to your local storage account (to switch to an Azure one, you'll need to create a storage account and use those credentials when you make your CloudStorageAccount above). To test you've got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded. Now that those files are there, we are ready for some final configuration…

    Right-click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project and click on the 'Settings' tab; we'll need to make some changes. By default, the 1.7.2 edition of Neo4j unzips to: neo4j-community-1.7.2 So we need to update all the 'neo4j-1.3.M02' directories to be 'neo4j-community-1.7.2', and we also need to update the Java runtime location to match the JRE we zipped earlier. I also changed the Endpoints settings to be HTTP (from TCP) and to have a port of 7410 (mainly because that's straight down on the numpad).

    The last 'gotcha' is some hard-coded consts, which had me looking for ages. They are in the 'ConfigSettings' class of the 'Neo4jServerHost' project, and the ones we're interested in are:
      Neo4jFileName
      JavaZipFileName
    Change them both to the file names you uploaded (jre7.zip and neo4j-community-1.7.2-windows.zip in my case).

    OK, nearly there (I promise)! Run the 'Compute Emulator' (same deal with the Start menu). In your system tray you should have an Azure icon; when the compute emulator is up and running, right-click on the icon and select 'Show Compute Emulator UI'.

    The last steps! Make sure the 'Neo4j.Azure.Server' cloud project is set up as the start project and let's hit F5. Tension mounts, the build takes place (you need to accept the UAC warning), and VS does its stuff. If you look at the Compute Emulator UI you'll see some log output (which you'll need if this goes awry – but it won't, don't worry!). In a bit, the console and a Java window will pop up; then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we've been assigned (in my case 7411). (If you can't see it, don't worry: press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for 'port' – you'll soon see it.)

    Go to your favourite browser, and head to: http://localhost:YOURPORT/ and you should see the WebAdmin! See you on the cloud side hopefully! Chris

    PS Other gotchas!
    OK, I've been caught out a couple of times: I had an instance of Neo4j running as a service on my machine; the Azure instance wanted to run the HTTPS version of the server on the same port the service was running on, and so Java would complain that the port was already in use. Also, the first time I converted the project, it didn't update the version of the Azure library to load in the App.config of the Neo4jServerHost project, and VS would throw an exception saying it couldn't find the Azure DLL version 1.0.0.0.

    Read the article

  • Prevent Win7 boot loader from taking over the WinXP boot loader

    - by Chris
    My setup:
      1 physical hard drive (500 GB, divided equally into 2 partitions)
      Windows XP partition (current OS)
      Empty partition where I will be installing Windows 7
    My question is: how do I prevent the Windows 7 boot loader from taking over my Windows XP boot loader when installing the new OS? The reason I am asking is that I already have a ghosted backup of my WinXP partition, and if I ever need to restore my XP partition using that backup, would it not overwrite the Windows 7 boot loader that was placed in the XP partition with the one from the backup, thus making Windows 7 unable to boot? Also, what would happen if I decided to delete the Windows XP partition altogether somewhere down the road, and along with it the Win7 boot loader that was placed there? Wouldn't that cause the system not to boot at all? To avoid these issues, I simply want to make sure that BOTH the Win7 and WinXP boot loaders are available on their respective partitions and do not interfere with each other in any way. Is this possible? Thx, Chris
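
    From what I've read, Windows setup puts its boot files on whichever primary partition is marked active, so one approach (a sketch; the partition numbers are illustrative, and I'd welcome corrections) is to mark the empty partition active before installing Windows 7, so its bootmgr lands on its own partition and the XP loader on the first partition is left alone:

        diskpart
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> select partition 2
        DISKPART> active
        DISKPART> exit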

    Read the article

  • Updated Linux, GRUB menu not booting to Windows partition

    - by Chris Flynn
    I have just updated my Ubuntu Linux to Ubuntu 10.04, and now my GRUB menu isn't letting me boot to the Windows partition. The problem seems to be with GRUB's update from using an editable menu.lst file to using a non-editable grub.cfg file. Everywhere I look it states "DO NOT EDIT THE GRUB.CFG FILE". I am at a loss as to what to do. I figure that the new configuration has messed up the Windows boot entry. Anyone have any suggestions on how to fix this? I am not sure if it is a Windows issue or an issue with the GRUB boot menu. Any help would be great. Thanks -Chris Flynn
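
    From what I've gathered since posting: grub.cfg is generated, which is why you mustn't edit it; the supported place for a hand-written Windows entry is /etc/grub.d/40_custom, followed by update-grub to regenerate grub.cfg. A sketch, assuming Windows is on the first partition of the first disk:

        # append to /etc/grub.d/40_custom, then run: sudo update-grub
        menuentry "Windows" {
            set root=(hd0,1)
            chainloader +1
        }

    Apparently os-prober normally finds Windows by itself during update-grub, so plain "sudo update-grub" is worth trying first.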

    Read the article

  • Install Git on my Media Temple (dv) 4.0 server

    - by Chris
    I'm trying to install Git on my Media Temple (dv) 4.0 server. I've followed these instructions: http://wiki.mediatemple.net/w/%28dv%29_4.0:Install_GIT It seems to have "installed", as there are a boatload of files in the following directory: /root/git-2012-06-06 However, when I run any git command on the server, I get: git: command not found My assumption is that something, somewhere isn't configured properly, but I have no idea where to start. Could anybody lend a hand / offer some pointers? (And if you hadn't guessed, I'm pretty new to all this, so please be kind!) Thanks very much, Chris
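
    In case the answer is simply "finish the install": that /root/git-2012-06-06 directory looks like an unpacked source tree, so (a sketch, assuming the wiki's build steps compiled cleanly) the binaries may just need installing onto the PATH:

        cd /root/git-2012-06-06
        make prefix=/usr/local all
        make prefix=/usr/local install
        # then confirm:
        which git && git --version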

    Read the article

  • Temporary file storage - script for webserver

    - by Chris
    I'm looking for a script (preferably PHP) which I can use to temporarily exchange files. I had such a script before (it was written in Flash), but - damn - I just can't find it anymore. Here are the features I'm looking for:
    - I have a logon, which allows me to create "upload spaces".
    - For these "upload spaces" I define how long the space will be available (until the files are deleted), and who can access it - in the sense of typing in an email address, and that user gets a link to the "online space", a user ID, and a password.
    - The other user then clicks on the link and uploads the file, and I get (preferably) an email should there be any file changes.
    - I get an email before the content gets deleted.
    Now, 3 additional things:
    - it should be open source
    - it should run on Linux (preferably LAMP)
    - no, I don't wanna use Dropbox or similar :p
    Thanks in advance everybody, Cheers Chris

    Read the article
