Search Results

Search found 21053 results on 843 pages for 'out of process'.

  • Novice prototyping a massive multiplayer webpage based gaming system

    - by Sean Hendlin
    I'm trying to build a website-based game in which various pages of the site act as different areas of the game. I am wondering what you would recommend as a design structure. Which languages would be best for building what will hopefully become a massive system able to scale to a huge number of users? I am wondering if, and how, various elements from differing languages could be meshed to interact with each other. For example, could I use HTML5, JavaScript, and PHP? What about ASP.NET, how might that factor in? I'm a newbie programmer, but I've been working on this idea for years and I want to build it into reality. Your comments and suggestions are appreciated.

    P.S.: The game is not all graphics and animation (though a Flash-like appearance and some animation would be nice). What I am thinking of is essentially a heavily gamified system of forms, and LOTS of data in many different categories cross-referencing each other. I'm not sure how to go about structuring the collection of data. Also, while I know JavaScript can be used to process some functions, I'm wondering what sort of base system I would need to handle the server-side processing of what I expect to be some pretty significant algorithmic processing. That is to say, I expect to have many, many functions and I'm not sure how to manage this using JavaScript. I feel like they would be forgotten, mixed up, and disorganized, as they essentially only exist where they are coded. I guess I need to learn something about libraries? OK, thank you! That's enough from me for now.

    Read the article

  • Apache configuration to accept all data

    - by ServerDown
    Hi, I have Apache running on port 7979 to talk with a device that sends data to the webserver; PHP scripts will later process the data and send a reply XML. The problem is that the device sends a request like

        POST HTTP/1.1
        Content-Type: text/xml
        Content-Length: 369

    followed by the XML. When Apache sees this (the request line is missing the request URI) it gives a 400 error. Since the device cannot be changed, is there any way to accept the full data sent from the device and write it to some log? Currently Apache simply keeps sending 400 errors back. If there were a way to log the entire XML, or to create some custom handler for the 400 error, the XML could then be read by a PHP script. Looking forward to solutions.
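
    One way to at least capture exactly what the device is sending (a debugging sketch, assuming Apache 2.4 with mod_dumpio available; the equivalent 2.2 directives differ slightly) is to dump the raw input, including the broken request line and the XML body, to the error log, where a PHP script could later pick it up:

        # debugging only -- this logs every byte received and is extremely verbose
        LoadModule dumpio_module modules/mod_dumpio.so
        DumpIOInput On
        LogLevel dumpio:trace7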

    Read the article

  • Switch or a Dictionary when assigning to new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of switch statements. I find it a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this:

        var fooDict = new Dictionary<int, IBigObject>()
        {
            { 0, new Foo() }, // Creates an instance of Foo
            { 1, new Bar() }, // Creates an instance of Bar
            { 2, new Baz() }  // Creates an instance of Baz
        };
        var quux = fooDict[0]; // quux references Foo

    Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead:

        var fooDict = new Dictionary<int, Func<IBigObject>>()
        {
            { 0, () => new Foo() }, // Returns a new instance of Foo when invoked
            { 1, () => new Bar() }, // Ditto Bar
            { 2, () => new Baz() }  // Ditto Baz
        };
        var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();'

    Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch:

        IBigObject quux;
        switch (someInt)
        {
            case 0: quux = new Foo(); break;
            case 1: quux = new Bar(); break;
            case 2: quux = new Baz(); break;
        }

    Which is more acceptable?

    • Dictionary: faster lookups and fewer keywords (case and break).
    • Switch: more commonly found in code, and doesn't require a Func<> object for indirection.

    Read the article

  • Cannot terminate process: access is denied

    - by jao
    Skype and Spotify remain active after I close them. When I try to close them via Task Manager > Details > End Task, I get the following error: "The operation could not be completed. Access is denied." So I have to reboot to get rid of these programs or to log in to Skype again. Also, running a CMD as administrator and executing taskkill /f /im skype.exe results in an "Access is denied" error. What is going on? (This is Windows 8 RTM x64.)

    Update: I have to kill skype.exe because it crashed, and when I restart Skype I get the following error: "Could not open Skype, you are already signed in on this computer."

    Update 2: The process is owned by my own username.

    Read the article

  • non-GUI connection to local Hyper-V VM without network

    - by sandro
    I have a virtual machine in Hyper-V Manager (Windows 2008 R2) without a network configured on the VM. From a PowerShell script running on the host Windows server, I would like to query the OS of that local VM for certain information (e.g. whether a given process has finished). I am using CodePlex's pshyperv module (https://pshyperv.codeplex.com/) to interact with Hyper-V Manager, but the only cmdlet to connect to the VM is 'New-VMConnectSession', which launches a 'vmconnect.exe' connection to the VM. Since vmconnect.exe is essentially RDP, this is not very script-friendly. From within a host's PowerShell script, is there any way to send a command to a local virtual machine's OS and receive output if no network is configured on the VM? (I believe VMware's 'vmrun' utility has this capability.) Another way to ask this question: does Hyper-V have a non-GUI-based form of vmconnect.exe? (P.S. Not sure if this was more Stack Overflow or Server Fault.)

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout. It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend. I found that the processes for developing complex SSRS reports aren't as commonly described as I would have thought. Below I will lay out the process that I went through to create a solution.

    I started with a List control which will contain the layout of the master (parent) information. This allows for a main repeating report part. The dataset for this report should include the data elements that need to be passed to the subreport as parameters. The layout is simply text boxes that are bound to the dataset.

    The next step is to set a row group on the List row. When the dialog appears, select the field that you wish to group your report by. A good example in this case would be the employee name or ID.

    Create a second report, which becomes the subreport. The example here uses a matrix control. Create the report as you would any parameter-driven document by parameterizing the dataset.

    Add the subreport to the main report inside the row of the List control. This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property.

    The last step is to set the parameters on the subreport. In this case the subreport has EmpId and ReportYear as parameters. Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience. You must make sure that the names match exactly. Tie the name of the parameter to either a field in the dataset or a parameter of the parent report.
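
    For illustration, the subreport's dataset is just an ordinary parameterized query; a minimal sketch is shown below (the table and column names are hypothetical, only the EmpId and ReportYear parameters come from the report described above):

        SELECT s.SaleDate, s.ProductName, s.Amount
        FROM   dbo.EmployeeSales AS s           -- hypothetical table
        WHERE  s.EmpId = @EmpId                 -- maps to the subreport's EmpId parameter
          AND  YEAR(s.SaleDate) = @ReportYear   -- maps to the subreport's ReportYear parameter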

    Read the article

  • How do I install Ubuntu 13.10 from a partition on my Mac?

    - by Barry
    I am trying to install Ubuntu 13.10 on my MacBook Air. I've previously had no issue installing from a USB stick to this machine. However, I don't currently have access to a USB stick or any external media at all! What I've done so far is partition my SSD into 3 partitions: one holds OS X, another is a 5 GB partition intended for the install ISO, and a third is intended to be the target for that install. The second two partitions are formatted as FAT. I've used dd (with and without bs=1m) to "burn" my ISO to the small 5 GB FAT partition. I also at one point tried using hdiutil to convert my ISO file to IMG and went through the same process with the same result below.

    After "burning" my ISO to the small partition, I reboot into rEFInd. rEFInd sees my small 5 GB partition perfectly well, and when I select that partition it loads GRUB appropriately. However, from here, regardless of what I choose, Ubuntu will start to load and then after a few minutes crash out to:

        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs) Unable to find a medium containing a live file system.

    I've Googled this error and found a number of people encountering it when trying to install from USB, but no solutions seem applicable to my case (installing from a partition on my SSD, to another partition on my SSD). Is there any solution to this, or do I just need to wait a few days until I have access to a USB stick? Many thanks in advance, and apologies for length – I figured I'd err on the side of being exhaustive rather than having people suggest things I've already tried.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and thought: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

    • improves the out-of-the-box experience – just build the mapping and the appropriate KM is used
    • improves out-of-the-box performance for file-to-file data movement

    This improvement to the out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here.

    The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems.

    So in my design, transforming a bunch of files, by default the IKM File to File (Java) knowledge module was assigned. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any supercomputer, by the way.

    The great thing here is that it worked well out of the box from the design to the execution without any funky configuration, plus – and it's a big plus – it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • What's the fastest and automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually pop it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading 2 GB (syncing); it could take hours. Copying 2 GB over the network is also slow because we're dealing with 10,000 little files that total 2 GB, not just one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
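
    One option worth considering (a sketch; the share name, destination path, and log file are made up) is robocopy, which ships with Windows 7, can be run nightly from Task Scheduler, and copes with large numbers of small files better when given its multithreaded switch:

        rem mirror the source share to the local folder using 16 copy threads
        robocopy \\PC1\nightly D:\nightly /MIR /MT:16 /R:2 /W:5 /LOG:C:\logs\nightly-copy.log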

    Read the article

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    I was doing fine, then I upgraded my system to 12.10 and now I can't get my system to update all of its packages properly. No matter what I do (cleaning the apt cache, manual install using dpkg, etc.) I just can't get them to install. What is happening here and how do I fix it? If I had thought 12.10 would be this much of a hassle I would never have upgraded. Here is a sample of the output from "apt-get -f install":

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
         subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
         /var/cache/apt/archives/pcmciautils_018-8_i386.deb
         /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
         /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
         /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
         /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It is also just these 6 packages; no other packages have given me this kind of trouble, at least so far. It was just 5, but then I got an update for Unity, and now unity-common has joined the troublemakers, which prevents me from further upgrading the actual unity package since unity-common is a dependency.

    Read the article

  • needs updated glibc package version 3.4.15 or later for RHEL6

    - by Tejas
    I want to upgrade my currently running applications to the latest versions, but due to a package issue I am unable to install them. The common error I get is: /usr/lib64/libstdc++.so.6: version 'GLIBCXX_3.4.15' not found. When I tried to update the glibc package I got the following output:

        [root@agastya ~]# yum install glibc
        Loaded plugins: refresh-packagekit, rhnplugin
        epel/metalink                    | 3.8 kB  00:00
        epel                             | 4.3 kB  00:00
        epel/primary_db                  | 5.0 MB  01:33
        epel-testing/metalink            | 3.8 kB  00:00
        epel-testing                     | 4.3 kB  00:00
        epel-testing/primary_db          | 295 kB  00:03
        rhel-x86_64-server-6             | 1.8 kB  00:00
        rhel-x86_64-server-6/primary     |  11 MB  02:02
        rhel-x86_64-server-6                          8816/8816
        Setting up Install Process
        Package glibc-2.12-1.80.el6_3.6.x86_64 already installed and latest version
        Nothing to do
        [root@agastya ~]#

    Do I need to add some more repositories? If yes, how?
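
    Note that the GLIBCXX_3.4.15 symbol comes from libstdc++ (part of the GCC runtime), not from glibc itself, so updating glibc will not resolve it. A quick way to see which GLIBCXX versions the installed library actually provides (a diagnostic sketch):

        strings /usr/lib64/libstdc++.so.6 | grep ^GLIBCXX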

    Read the article

  • Reconciling vmware memory vs windows memory usage

    - by RyanW
    I have a Windows 2008 R2 64-bit virtual machine on an ESXi 4.1 host. The host reports that the virtual machine is actively using less than 1 GB of memory, but in Windows it's reporting that the machine is using 7 GB of memory, even though the total of the processes listed in Task Manager is less than 1 GB. The machine is rather unresponsive and I'm concerned this is impacting other applications (the server's purpose is to run the ASP.NET state server process, which has been having trouble, and that's what led me to spot the memory question). I just noticed "High memory usage Windows Server 2008r2 on VMware" and will be looking through those documents more, but what is causing this?

    Read the article

  • How to set up a SonicWall TZ210 to port forward packets received from an external IP to another external IP

    - by lplp
    I have a SonicWall TZ210 on a fixed IP, say ip1. Then I have, let's say, a legacy server with external IP ip2, which sends data to ip1 (and I have another server behind the SonicWall on ip1 which receives and processes that data). I would like to set up a new server on a different external IP, ip3, that will receive and process data from the legacy server. How can I set up the SonicWall so that the packets received from the legacy server (from an external IP) are forwarded on to the external IP address ip3?

    Read the article

  • Oracle BPM overview and roadmap session on Monday, October 1st

    - by Manoj Das
    Bhagat Nainani and I, Manoj Das, will present a session on Oracle BPM overview and road map on Monday, October 1 2012, from 12:15-1:15 PM at Moscone South - 308. Since last OpenWorld, many good things have happened. Many customers have gone live with their BPM 11g deployments, some of whom were nominated for the Innovation Awards. From a product perspective, we delivered 11.1.1.6 and 11.1.1.7 is just around the corner. We will discuss some of the highlights related to both customer successes and product features. In particular, we will present some of the exciting new capabilities that we are introducing in 11.1.1.7 around business analyst driven model-to-execution, more comprehensive unified BPM suite, more flexible and manageable BPM. Another significant development is the release of Process Accelerators. We have not only released accelerators, we have ourselves deployed and are using them internally. We will talk about accelerators as well as our learnings. As the title suggests, we will also share some aspects of our roadmap - there are some very exciting things brewing that I can't wait to share with you on Monday. Hoping to see you on Monday. Again, the session is in Moscone South - 308 from 12:15-1:15. Looking forward to your tweets on the session - remember to use #oraclebpm and #oow. Finally, as always, feel free to ask Bhagat and me any questions you have, during the session as well as after the session.

    Read the article

  • Migrate Thunderbird 3 Saved Searches Between Accounts

    - by UltraNurd
    Long story short, the sysadmins have moved me to a new mailserver. In the process, they needed to create a separate account in Thunderbird and disable my old account. They took care of all of the mail migration; however, my saved search folders didn't go along for the ride. I have over 20 complex searches that I'd rather not have to re-enter by hand, and you can't drag saved searches between accounts like other folders. I tried closing Thunderbird, doing a find/replace in virtualFolders.dat in my Thunderbird profile folder, saving that file, and reopening Thunderbird, but that didn't appear to do anything. I'm assuming the search folders are also saved in one of the sqlite databases... does anyone know where to look?

    Read the article

  • How to Transfer All Your Information to a New PS3

    - by Justin Garrison
    The PlayStation 3 now costs half the price, has double the storage, and uses half the power. If you need another reason to upgrade, Sony also makes it easy to transfer all of your information to a new console. Transferring all of your games, data, and settings is easier than ever, and all you need is an ethernet cable. Read on as we walk you through the whole process of setting up your new PS3 and wiping all your information off the old one.

    Read the article

  • Showing content from pages at different URL's (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like:

        domain.com/category/widgets/filter/blue
        domain.com/category/widgets/filter/red

    and it is pretty difficult to reconstruct them to something like:

        domain.com/category/blue-widgets
        domain.com/category/red-widgets

    is there any way at all that I can use URL rewrites, or anything else with .htaccess or on the server, to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I want to implement the above for potentially hundreds of URLs.

    Edit: To respond to @kkugelmann's answer: I couldn't get your proposed RewriteRule to make any difference in the .htaccess file, so I started testing a few things in an .htaccess tester. The proposed RewriteRule didn't work in the tester, but a variation of it did. However, adding any of these RewriteRules to the website's .htaccess file did not rewrite the URL at all...

    Edit 2: By the way, if I add [R=301,L] to the end of the rewrite rule, it does then rewrite the URL, but of course it 301 redirects as well, which is unwanted behaviour.

    Edit 3: I found another question with the same issue, and an accepted answer that solved the problem, which seemed to be something to do with using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).
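
    For reference, an internal (non-redirecting) rule for this kind of mapping would normally look something like the sketch below (the pattern is a guess based on the widget URLs above); note that on a Magento site the front controller may still route on the original REQUEST_URI, which could explain why plain rewrite rules appear to do nothing without [P] or a redirect:

        # sketch only: map the pretty URL to the real filter path without redirecting
        RewriteEngine On
        RewriteRule ^category/([a-z0-9]+)-widgets/?$ category/widgets/filter/$1 [L]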

    Read the article

  • How to convert dvr-ms file in Ubuntu to DVD?

    - by edmicman
    I have a .dvr-ms file of a recorded TV show from my Vista Media Center. I would like to burn this to a DVD that can play on any standalone DVD player. My main PC that I want to use to convert it to a DVD format is running Ubuntu 10.04. I am able to play the file in Ubuntu using VLC (which surprised me) so I'm assuming I have what I need to decode it. I guess my questions are: What format do I need to convert this file to so that I could burn it to a playable DVD? I started to go through VLC's conversion process and chose I think H264 and AAC or something, and it gave a message about not having an AAC encoder. I'll look into that some more tonight, but is that something I could then burn to a DVD? Thanks for any help!
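
    For what it's worth, one possible route (a sketch; it assumes ffmpeg is installed and the file names are placeholders) is to transcode straight to DVD-compliant MPEG-2 using ffmpeg's built-in DVD target, and then author and burn the result with a tool such as DeVeDe or dvdauthor:

        # NTSC regions; use -target pal-dvd for PAL
        ffmpeg -i recording.dvr-ms -target ntsc-dvd recording.mpg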

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So without the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine, and booting into Win 7 also works fine. I have a backup machine with the same style of mouse on it, so I swapped that mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my Ubuntu 12.04 HD, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software.

    My machine's specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4 GB x 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to internet through a router (10 Mbps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000. I try to maintain current BIOS and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article

  • Connect to bluetooth device from command line

    - by Ilari Kajaste
    Background: I'm using my bluetooth headset as audio output. I managed to get it working by following the long list of instructions in the BluetoothHeadset community documentation, and I have automated the process of activating the headset as the default audio output into a script, thanks to another question. However, since I use the bluetooth headset with both my phone and my computer (and the headset doesn't support two input connections), in order for the phone not to "steal" the connection when the handset is turned on, I force the headset into discovery mode when connecting to the computer (the phone gets to connect to it automatically). So even though the headset is paired OK and would in the "normal" scenario autoconnect, I always have to use the little bluetooth icon in the notification area to actually connect to my device.

    What I want to avoid: having to go through that GUI applet for connecting to a known and paired bluetooth device.

    What I want instead: I want to make bluetooth do exactly what clicking the connect item in the GUI does, only from the command line. I want to use the command line so I can make a single-keypress shortcut for the action and wouldn't need to navigate the GUI every time I want to establish a connection to the device.

    The question: How can I attempt to connect to a specific, known and paired bluetooth device from the command line? Further question: How do I tell if the connection was successful or not?
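
    For what it's worth, on releases that ship BlueZ 5 the bluetoothctl tool can do this from a script (a sketch; the MAC address is a placeholder, the one-shot command form needs a reasonably recent BlueZ, and older BlueZ 4 stacks don't have bluetoothctl at all):

        # connect to the already-paired headset
        bluetoothctl connect 00:11:22:33:44:55
        # check whether it worked: look for "Connected: yes"
        bluetoothctl info 00:11:22:33:44:55 | grep Connected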

    Read the article

  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a saucy server with a lot of NICs and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist even though it's supposed to be created automatically. So I decided to write my own based on advice from Linux From Scratch:

        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"

    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:

        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1
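
    One way to check whether the rules file is being parsed at all (a debugging sketch; the interface name is taken from the question) is to run udev's test mode against one of the misnamed interfaces and inspect the keys it would match on:

        # show what udev would do for this device, including which rules files it reads
        udevadm test /sys/class/net/rename19 2>&1 | grep -iE '70-persistent-net|NAME'
        # list the attributes available for matching (SUBSYSTEMS, KERNELS, etc.)
        udevadm info -a -p /sys/class/net/rename19 | head -n 40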

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently having issues, so I wanted to run a script periodically to clean things up a little. The script below is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases and reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

        REM Server Clean Script
        REM ===================
        REM This script is run to move the build server back to a clean state

        echo Stop Cruise Control
        net stop CCService

        echo Stop IIS
        iisreset /stop

        echo Stop BizTalk Services
        net stop BTSSvc$<Name of BizTalk Host>
        REM <Repeat for other BizTalk services>

        echo Stop SSO
        net stop ENTSSO

        echo Stop SQL Job Agent
        net stop SQLSERVERAGENT

        echo Clean Message Box
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

        echo Clean Tracking Database
        sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

        echo Reset TDDS Stream Status
        sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

        echo Force Full Backup
        sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

        echo Clean Backup Directory
        del E:\BtsBackups\*.* /q

        echo Start SSO
        net start ENTSSO

        echo Start SQL Job Agent
        net start SQLSERVERAGENT

        echo Start BizTalk Services
        net start BTSSvc$<Name of BizTalk Host>
        REM <Repeat for other BizTalk services>

        echo Start IIS
        iisreset /start

        echo Start Cruise Control
        net start CCService

    Read the article

  • print jobs are held until the VirtualBox guest OS is reboot

    - by broiyan
    Here is the setup:

    • VirtualBox 4.1.20 (which the Help window describes as 4.1.12_Ubuntu)
    • Extension Pack 4.1.20 (for USB support)
    • Windows 7 Home Premium as the guest operating system on VirtualBox
    • Ubuntu 12.04 with dist-upgrades to September 2012 as the host operating system
    • Fuji Xerox DocuPrint P205b, which I believe is a GDI printer, connected via USB

    The problem is that print jobs will often sit in the print queue and nothing comes out of the printer. The printer status for the first item in the queue will be Printing even though nothing happens. Then, upon rebooting Windows, the print jobs get printed, seemingly simultaneously with the rebooting process, that is, as Windows reloads. One way to avoid this problem is to reboot Windows with the printer cable attached and then submit the print jobs; they then get printed in a timely manner. Perhaps VirtualBox has a problem with USB being plug-and-play and hot-pluggable. It's not convenient to have the printer plugged in when Windows boots because: one, this is a laptop, and two, I may boot Windows for a purpose other than printing and not anticipate needing to print. Are there any recommendable fixes for this problem?

    Read the article

  • What is the safest way to remove a swap partition?

    - by user212062
    I am running Ubuntu 12.04 on a 64-bit HP laptop with a 16 GB flash drive. I do not have a working hard drive right now. When I installed Ubuntu, I created a 2 GB swap partition on sdb1. I have since learned that swap partitions are generally a bad idea on flash drives, so I would like to use my swap space for my other partitions. You can see my partition scheme in the link below. I have read that I just have to comment sdb1 out of the fstab file, boot from a GParted live CD, select swapoff for sdb1, delete/merge it with another partition, and everything's good. But I've also read that messing with sdb1 can change the UUID of sdb2 or sdb3 and cause problems. Is this true? Does initramfs use swap at all? Also, when I get Ubuntu running on my laptop with an internal hard drive, does the swap partition help that much? I have 6 GB of DDR3. Does the rule of 1.5x actual RAM still apply? It seems like quite a bit to me. Thanks for the help!

    UPDATE: I have removed swap. The process I followed was:

    1. Right-clicked the swap partition in GParted and selected swapoff.
    2. Used # to comment the swap partition out of fstab.
    3. Tried to boot from a GParted live CD, but I kept getting an error, so I ran GParted in Ubuntu instead.
    4. Deleted the swap partition in GParted.
    5. Unmounted /windows.
    6. Expanded /windows to take the remaining space.
    7. Mounted /windows.

    The / and /windows partitions each kept their own names and UUIDs, and everything is running fine. I have never seen any swap space being used before, and I don't intend to use the hibernate function, so I think removing swap was a good idea.
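
    For reference, the command-line half of that process (a sketch; /dev/sdb1 is the swap partition described above) boils down to:

        # stop using the swap partition immediately
        sudo swapoff /dev/sdb1
        # then comment its entry out of /etc/fstab so it is not activated at boot, e.g.:
        #   #UUID=xxxx-xxxx-xxxx  none  swap  sw  0  0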

    Read the article
