Search Results

Search found 27496 results on 1100 pages for 'distributed source ctrl'.


  • Passing data from one database to another database table (Access) (C#)

    - by SAMIR BHOGAYTA
    // Requires: using System; using System.Data; using System.Data.OleDb;
    string conString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Backup.mdb;Jet OLEDB:Database Password=12345";

    // Open the destination database and copy every row of [Master] across from the external Data.mdb.
    using (OleDbConnection dbconn = new OleDbConnection(conString))
    using (OleDbCommand dbcommand = dbconn.CreateCommand())
    {
        try
        {
            if (dbconn.State == ConnectionState.Closed)
                dbconn.Open();

            string selQuery = "INSERT INTO [Master] SELECT * FROM [MS Access;DATABASE=" + "\\Data.mdb" + ";].[Master]";
            dbcommand.CommandText = selQuery;
            dbcommand.CommandType = CommandType.Text;
            int result = dbcommand.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            // Don't swallow the exception silently; surface it so a failed copy is visible.
            Console.WriteLine(ex.Message);
        }
    }

    Read the article

  • TechDays 2010 Portugal - The Day After

    - by Ricardo Peres
    Well, TechDays 2010 Portugal is over, time to take stock. I really enjoyed being a speaker; although my presentation took a lot more time than it should have, it was gratifying to see so many people stay until the end. Lots of subjects were left behind, though. My presentation is available at my SkyDrive, here. I will post the source code there soon, too. I would like to know if you were there, and, if so, what you thought of my presentation! Feel free to send your thoughts, whatever they are. On the other hand, I saw some really interesting presentations, to name a few: from Nuno Antunes, Nuno Godinho, Filipe Prezado, Nuno Silva and my friend André Lage. I also had the chance to finally meet Caio Proiete and Pedro Perfeito. Perhaps we'll meet again at TechDays Remix, who knows.

    Read the article

  • Are There Any Examples of Uncle Bob's High-Falutin' Architecture?

    - by Jordan
    I just finished watching this presentation by Uncle Bob (as well as the "Architecture" section of his "Clean Code" videos), but I'm left wondering: are there any examples out there of applications that implement this Entity-Boundary-Interactor (or Entity-Boundary-Controller) structure? At one point I downloaded the source code to FitNesse (the acceptance testing project he often mentions as an example of not only high test coverage but good architecture, since they were able to defer the decision to not use a database until the very end), and based on a quick glance, even this project doesn't seem to fit the pattern. Are there any nontrivial examples of this architecture out in the wild, or should I not bother even looking into it and chalk it up as "it would be great if you could get there, but nobody really does"?

    Read the article

  • Loading the 'pktgen' module on Ubuntu Server

    - by StackedCrooked
    I would like to enable and use the pktgen module on Ubuntu Server. I have enabled the module by adding a line containing 'pktgen' to the /etc/modules file. After rebooting, the module appears to be loaded, because the directory /proc/net/pktgen exists. However, when trying to run the first sample I get these errors: root@ubuntu:~# bash ./pktgen.conf-1-1 Removing all devices Adding eth4 Setting max_before_softirq 10000 Configuring /proc/net/pktgen/eth4 ./pktgen.conf-1-1: line 9: /proc/net/pktgen/eth4: No such file or directory cat: /proc/net/pktgen/eth4: No such file or directory (these two lines repeat several more times) Running... ctrl^C to stop Done It turns out the script is simply unable to write to a file in the /proc/net/pktgen directory. When I try this manually it fails as well: root@ubuntu:~# cd /proc/net/pktgen/ root@ubuntu:/proc/net/pktgen# touch eth4 touch: cannot touch `eth4': No such file or directory Can anyone help me make it work? I'm using Ubuntu with kernel 2.6.32-21-server. Fixed: I apologize for not keeping this post up to date. I was able to fix it. If I remember correctly, the cause of the error was that eth4 did not exist, or was not up. Anyway, it is fixed now.

    Read the article

  • How to instrument existing ASP.NET application?

    - by jkohlhepp
    We have several highly complex ASP.NET web applications that are used internally by hundreds of users. We are trying to figure out which areas of the applications to invest in to improve functionality, but we aren't sure which screens/features are more heavily used. So, ideally, I'd like to find a way to add a layer of instrumentation to the applications that gathers metrics on which buttons are being clicked, which text boxes are being used, etc. Are there any products / open source apps out there that will do this sort of instrumentation for ASP.NET? Obviously I could do it myself by going into the code and injecting logging statements everywhere, but that would be a significant amount of work.
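    One way to get this kind of usage data without editing every page is a small HTTP module that records each page request and, for WebForms postbacks, the control that triggered it (the __EVENTTARGET field). The sketch below is only a minimal illustration under assumptions: the module name, log path and tab-separated format are invented for the example, and a real version would buffer writes or hand the data to a proper logging library rather than appending to a file on every request.

    using System;
    using System.IO;
    using System.Web;

    // Hypothetical module name; not an existing product.
    public class UsageLoggingModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.PostRequestHandlerExecute += (sender, e) =>
            {
                HttpContext ctx = ((HttpApplication)sender).Context;

                // Only record page requests, not static resources or handlers.
                if (ctx.Handler is System.Web.UI.Page)
                {
                    string line = string.Format("{0:u}\t{1}\t{2}",
                        DateTime.UtcNow,
                        ctx.Request.AppRelativeCurrentExecutionFilePath,      // which screen
                        ctx.Request.Form["__EVENTTARGET"] ?? "");             // which control posted back, if any

                    // Simplest possible sink; not thread-safe or efficient, purely illustrative.
                    File.AppendAllText(ctx.Server.MapPath("~/App_Data/usage.log"), line + Environment.NewLine);
                }
            };
        }

        public void Dispose() { }
    }

    Registering a module like this once in web.config instruments the whole application without touching individual screens, which is the main appeal over hand-placed logging statements.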

    Read the article

  • BOOTMGR is compressed

    - by JavaAndCSharp
    So I've just finished messing up my computer beyond repair. I originally had a Windows 7 install on my 64GB SSD and all my data on my 1.5TB HDD. So I decided to install Windows 8 Consumer Preview. I tried to clone my 64GB SSD onto my HDD, but that didn't work at first. I later learned that it wiped all my data off of it. Thank goodness for the cloud. So I retried the cloning after I realized that my data was gone, and I finished it. So I rebooted and got a cryptic error message: BOOTMGR is compressed. Press Ctrl+Alt+Del to restart. So I did. Several times. I followed the instructions from many websites, including HowToGeek, smallvoid.com, and more, to rebuild the boot manager. None worked. BTW, I've lost my Windows 7 Professional DVDs. All I have is my Windows 7 Home Premium DVD. It's the one I got with my laptop. How can I get back to a working computer? I'm willing to wipe anything, as my data is all in the cloud.

    Read the article

  • Google I/O 2010 - Fireside chat with the App Engine team

    Fireside Chats, App Engine. Speakers: Sean Lynch, Kevin Gibbs, Don Schwarz, Matthew Blain, Guido van Rossum, Max Ross, Brett Slatkin. It's been a busy year for the App Engine team, with lots of new features and lots of new developers. Come tell us about what you've loved and what still bugs you. With several members of the App Engine team on deck, you'll get the answers to your questions straight from the source. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 57:59.

    Read the article

  • "A disk read error occurred" when booting XP disk image in VirtualBox

    - by intuited
    I'm trying to boot an XP installation cloned into VirtualBox from a real drive. I'm getting the message A disk read error occurred Press Ctrl+Alt+Del to restart whenever* I try to boot the machine. * This is not strictly true: with AMD-V enabled, the boot process appears to not make it this far and instead hangs at a black screen with cursor. I created the VirtualBox image from the original drive using the following method: $ sudo ddrescue -n /dev/sdd sdd.img logfile # completed without errors $ VBoxManage convertfromraw sdd.img disk.vdi The original disk (and the image) contain a single NTFS partition with XP installed on it. The owner of the drive indicates that it did boot okay the last time the system made it that far. The (Pentium 4) system has a broken (enormous) heat sink, so at some point it failed to boot because it would quickly overheat and shut down. If I boot the VM from a live cd, I am able to mount its /dev/sda1 without any problems. I ran ntfsfix and didn't have any luck. I've read through the instructions on doing this. I didn't really follow them. For example, I didn't run MergeIDE before imaging because the machine was not bootable. However, the symptom of that problem seems to be quite different. The emitted message is contained in the volume boot record of the XP partition, which leads me to suspect that this is a problem with the core operating system bootstrap procedure, and not related to anything in the registry. I don't have an XP boot CD.

    Read the article

  • Can you "swap" the Sysprep answer file in Windows 7

    - by Ben
    I have a load of new Lenovo laptops which I am due to distribute in my company. We are distributed across multiple locations and I want to ship the laptops "boxed" and untouched by IT hands for distribution. We are using LANDesk to do all the software distribution and provisioning, but are currently falling at the first hurdle: when booted, the laptops kick into the Lenovo mini-setup wizard. I assume this is because they have been sysprepped at Lenovo. In order to keep with our (almost) zero-touch strategy I want the users to PXE boot into a PE of some sort, which will run a script on startup that replaces the Sysprep answer file with one of my own (i.e. prepopulated with product key, company info, etc.) and then reboots to complete Sysprep. The plan is that this will run, and then install the LANDesk agent as a post-Sysprep task, which in turn will complete the provisioning. Anyone have any experience / know any pitfalls to look out for / can suggest a suitable, PXE-bootable PE environment? Apologies for the verbosity of the question - it takes a bit of explaining! Thanks in advance, Ben

    Read the article

  • The next Linux kernel will handle multicore processors, among other innovations

    Update of 07.04.2010 by Katleen. The next Linux kernel will handle multicore processors, among other innovations. Things are busy in the open source world. Several contributors are working on the upcoming Linux 2.6.34 kernel, which could bring big steps forward in certain areas. First, for multicore processors: Arnd Bergmann is working on removing the kernel's giant lock (the Big Kernel Lock). At the moment it is not possible to perform certain operations simultaneously, which considerably reduces the performance of machines with multicore processors. His fix, which removes the tasks that block the machine, is reportedly ready to be integrated into the Linux kernel. ...

    Read the article

  • Favorite Visual Studio 2010 Extensions, Update

    - by Scott Dorman
    With the release of the Visual Studio Pro Power Tools (and many other new extensions), my list of favorite Visual Studio extensions has changed. All of these extensions are available in the Visual Studio Gallery. Here is the list of extensions that I currently have installed and find useful: Bing Start Page CodeCompare Collapse Selection In Solution Explorer Collapse Solution Color Picker Completion Extension Analyzer Find Results Highlighter Find Results Tweak (Available from CodePlex) Format Document HelpViewerKeywordIndex HighlightMultiWord Image Insertion Indentation Matcher Extension ItalicComments MoveToRegionVSX Numbered Bookmarks PowerCommands for Visual Studio 2010 Regular Expressions Margin Search Work Items for TFS 2010 Source Outliner Spell Checker Structure Adornment This also installs the following extensions: BlockTagger BlockTaggerImpl SettingsStore SettingsStoreImpl StyleCop Team Foundation Server Power Tools TFS Auto Shelve Visual Studio Color Theme Editor Visual Studio Pro Power Tools VS10x Code Map VS10x Code Marker VS10x Collapse All Projects VS10x Editor View Enhancer VS10x Insert Debug Names VS10x Selection Popup VS10x Super Copy Paste VSCommands 2010 Word Wrap with Auto-Indent   Technorati Tags: Visual Studio,Extensions

    Read the article

  • How to kill unkillable Python-processes running as root

    - by Andrei
    I am experiencing an annoying problem with sshuttle, running it on OS X 10.7.3 on a MacBook Air with the latest firmware update -- after I stop it (Ctrl+C twice), or lose the connection, or close the lid, I cannot restore it until I restart the system. The restart also takes notably more time than it normally would. I have tried to flush the ipfw rules - it didn't help. Could you advise me how to restore the sshuttle connection (without restarting the OS)? The following processes remain running as root, and I do not know how to kill them (tried sudo kill -9 <pid> with no luck): root 14464 python ./main.py python -v -v --firewall 12296 12296 root 14396 python ./main.py python -v -v --firewall 12297 12297 root 14306 python ./main.py python -v -v --firewall 12298 12298 root 3678 python ./main.py python -v -v --firewall 12299 12299 root 2263 python ./main.py python -v -v --firewall 12300 12300 The command I use to run the proxy: ./sshuttle --dns -r [email protected] 10.0.0.0/8 -vv The last message I get when trying to restore the connection: ... firewall manager: starting transproxy. s: Ready: 1 r=[4] w=[] x=[] s: < channel=0 cmd=PING len=7 s: > channel=0 cmd=PONG len=7 (fullness=554) s: mux wrote: 15/15 s: Waiting: 1 r=[4] w=[] x=[] (fullness=561/0) >> ipfw -q add 12300 check-state ip from any to any >> ipfw -q add 12300 skipto 12301 tcp from any to 127.0.0.0/8 >> ipfw -q add 12300 fwd 127.0.0.1,12300 tcp from any to 10.0.0.0/8 not ipttl 42 keep-state setup >> ipfw -q add 12300 divert 12300 udp from any to 10.0.1.1/32 53 not ipttl 42 >> ipfw -q add 12300 divert 12300 udp from any 12300 to any not ipttl 42 Update: $ ps -ax|grep python 1611 ?? 0:06.49 python ./main.py python -v -v --firewall 12300 12300 48844 ?? 0:00.05 python ./main.py python -v -v --firewall 12299 12299 49538 ttys000 0:00.00 grep python

    Read the article

  • Animations in FBX exported from Maya are anchored in the wrong place

    - by Simon P Stevens
    We are trying to export a model and animation from Maya into Unity3d. In Maya, the model is anchored (pivot point) at the feet (and the body moves up and down). However, after we perform the FBX export and import the file into Unity, the model now appears to be anchored by the waist/head and the feet move. These example videos probably help explain the problem more clearly: Example video - Maya - Correct Example video - Unity - Wrong We have also noticed that if we take the FBX file and import it back into Maya we have exactly the same problem. It seems that the constraints no longer work after the FBX is reimported back into Maya, which kills the connection between the joints and the control objects. When we exported the FBX we tried checking the 'bake animations' check box. The fact that the same problem exists when importing the FBX back into both Maya and Unity suggests that the source of the problem is most likely the Maya FBX export. Has anyone encountered this problem before, and do you have any ideas how to fix it?

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates data and log files, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.

    RESTORE DATABASE WidgetProduction_Virtual
    FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
         MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
         NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, the original article shows my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
    I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon. The issues: sporadic Bad File Descriptor and Piece #xxx is corrupt errors. After I recheck the torrent, everything seems fine. It happens only while downloading, not once the torrent is in seeding mode, and only after the client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running transmission-daemon from the Ubuntu repository, and I have since switched to building the latest transmission from source with the latest libevent (all stable) - same thing. I've seen bug reports (https://trac.transmissionbt.com/ticket/4147) for this issue, but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?

    Read the article

  • Tuning GlassFish for Production

    - by arungupta
    The GlassFish distribution is optimized for developers and needs simple deployment and server configuration changes to provide the performance typically required for production usage. The formal Performance Tuning Guide provides an explanation of capacity planning and tuning tips for the application, GlassFish, the JVM, and the operating system. The GlassFish Server Control (only with the commercial edition) also comes with a Performance Tuner that optimizes the runtime for throughput and scalability. And then there are multiple blogs that provide more insights as well: • Optimizing GlassFish for Production (Diego Silva, Mar 2012) • GlassFish Production Tuning (Vegard Skjefstad, Nov 2011) • GlassFish in Production (Sunny Saxena, Jul 2011) • Putting GlassFish v3 in Production: Essential Surviving Guide (JeanFrancois, Nov 2009) • A GlassFish Tuning Primer (Scott Oaks, Dec 2007) What is your favorite source for GlassFish performance tuning?

    Read the article

  • What are the reasons for Unity3D's owners to force rich guys to buy the Pro version?

    - by mhambra
    Well, I have to say that Unity is a really nice thing that can save one dozens of hours of coding (letting you work on gameplay right away). But what's the idea of forcing (via the EULA) any party that made over 100k last fiscal year to purchase Pro instead of using the normal edition!? It feels like this kind of licensing provides hidden benefits to the rich guys over me, a poor sloven, who can afford to buy the $3.5k license but obviously will not receive any additional cookies from it. And, by the way, has anyone estimated how much Unity's source + Playstation + Xbox license would cost?

    Read the article

  • Nexus 1000v VEM fails on 2 out of 8 hosts.

    - by cougar694u
    I have 8 ESXi hosts. I do a fresh install from the installable CD directly to 4u1. We have another 2-node cluster with a working Nexus 1000v primary & secondary. Everything's up and running. I installed 6 hosts and everything worked great: I migrated them to the Nexus DVS, and VUM installed the modules. I did the 7th host, and when I tried to migrate it to the DVS, it failed with the following error: Cannot complete a Distributed Virtual Switch operation for one or more host members. DVS Operation failed on host , error during the configuration of the host: create dvswitch failed with the following error message: SysinfoException: Node (VSI_NODE_net_create) ; Status(bad0003)= Not found ; Message = Instance(0): Inpute(3) DvsPortset-0 256 cisco_nexus_1000v got (vim.fault.PlatformConfigFault) exception Then I tried to do host 8, and got the exact same problem. It worked about 15 minutes prior when I did host 6; nothing changed, then I went to host 7 and it failed. If I try to remediate either of these two hosts, with either patches or extensions, it fails. Anyone else have these problems?

    Read the article

  • How come many project-hosting sites don't have a forum feature?

    - by george
    I'm considering starting an open-source project, so I shopped around some popular project hosting sites. What I find surprising is that many (see here for a nice feature table) of the popular project hosting sites (e.g. GitHub, BitBucket) don't have a forum feature, i.e. a place where users can talk to the devs, ask questions, raise ideas, etc. IMHO an active forum is an important factor in creating a user community around a project, so I would expect that most project owners would be interested in such a feature. I've also noticed that some projects do have support forums (or mailing lists) hosted elsewhere - e.g. Ruby on Rails is hosted on GitHub but has a Google Groups support group, and TortoiseHG is hosted on BitBucket but has a mailing list on SourceForge - so it's not like this feature is unneeded. So how come many project hosting sites don't have a forum feature?

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profilers, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem". "they are simple to use, but the results are hard to understand" "Complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: Profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle", with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.
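    For anyone wondering what the "logging statements" approach actually looks like, the hypothetical snippet below shows the kind of manual instrumentation respondents described: a Stopwatch around the region of interest and one line written out per run, which then has to be collected and aggregated by hand. The class, method names and workload here are invented purely for illustration; only Stopwatch and Console are real .NET APIs.

    using System;
    using System.Diagnostics;

    public static class Program
    {
        public static void Main()
        {
            // Manual instrumentation: start a timer, run the code, write a log statement.
            Stopwatch sw = Stopwatch.StartNew();

            DoExpensiveWork();   // stand-in for the code being measured

            sw.Stop();
            // A statement like this is needed around every region of interest, and the
            // resulting log lines still have to be parsed and summed to find hot spots.
            Console.WriteLine("DoExpensiveWork took {0} ms", sw.ElapsedMilliseconds);
        }

        private static void DoExpensiveWork()
        {
            System.Threading.Thread.Sleep(100);   // placeholder workload
        }
    }

    Compared with a profiler, which collects method- and line-level timings automatically, every timed region here costs an edit, a rebuild, and later log analysis, which is exactly the laboriousness discussed above.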

    Read the article

  • Ubuntu Laptop Screen does not turn on after sleep

    - by Gage
    I recently put Ubuntu 10.10 on my laptop, and after fixing the wireless problem the only thing I'm stuck on is the screen not turning back on after sleep. I have had this problem since Ubuntu 7. I tried using Ubuntu way back then and had a whole bunch of issues with sleep and the wireless (Broadcom 4311). Anyways, I have an ATI Radeon Xpress 200M graphics card (old laptop). When I go to Hardware Drivers it doesn't give me any option to use the closed-source drivers. Any suggestions on what I should do? I am going to try what is suggested in this thread, but I am at work right now: Laptop does not wake up after sleep. Thanks in advance for any help.

    Read the article

  • Version control and project management for freelancing jobs

    - by Groo
    Are there version control and project management tools which "work well" with freelancing jobs, if I want to keep my customer involved in the project at all times? What concerns me is that repository hosting providers base their fees on the "number of users", which I feel is a number that will constantly increase as I finish one project after another. For each project, for example, I would have to add permissions for my contractor to allow him to pull the source code and collaborate. So how does that work in practice? Do I "remove" the contractor from the project once it's done? This means I basically state that I offer no support or bugfixes anymore. Or do freelancers end up paying more and more money for these services? Do you use such online services, or do you host them yourself? Or do you simply send your code to your customer by e-mail in weekly iterations?

    Read the article

  • Are VB.NET to C# converters actually compilers?

    - by Rowan Freeman
    Whenever I see programs or scripts that convert between high-level programming languages, they are always labelled as converters. "VB.NET to C# converter" on Google results in expected, useful hits. However, "VB.NET to C# compiler" on Google results in things like comparisons between the C# and VB.NET compilers and other hits that are not quite what you'd be looking for. Webopedia defines "compiler" as "a program that translates source code into object code". Eric Lippert, in an answer to "How do I create my own programming language and a compiler for it", suggests: "One of the best ways to get started writing a compiler is by writing a high-level-language-to-high-level-language compiler." Is a converter really just a compiler? What separates the two?

    Read the article

  • How to remove request blocking on an Apache reverse proxy after a backend failure, before asking the backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind apache is down, the first request to apache shows the error page of course. But at subsequent requests it seems apache delays for some time before asking the backend server again. During all this time (which is short but in development I don't want a delay at all) only the apache error page is shown to the browser, although the backend server is already up. Where is this setting in apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that apache blocks further requests for a certain time before asking a backend server again that has failed once. Edit2: This is what apache delivers: Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80 After hitting Ctrl-R in firefox for 60 seconds the page finally appears.

    Read the article
