Search Results

Search found 5727 results on 230 pages for 'routed commands'.

Page 140/230 | < Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >

  • Android – Finding your SDK debug certificate MD5 fingerprint using Keytool

    - by Bill Osuch
    I recently upgraded to a new development machine, which means the certificate used to sign my applications during debug changed. Under most circumstances you'll never notice a difference, but if you're developing apps using Google's Maps API you'll find that your old API key no longer works with the new certificate fingerprint. Google's instructions walk you through retrieving the MD5 fingerprint of your SDK debug certificate (the certificate that you're probably signing your apps with before publishing), but they don't say much about the Keytool command itself. The thing to remember is that Keytool is part of Java, not the Android SDK, so you'll never find it by searching through your Android and Eclipse directories. Mine is located in C:\Program Files\Java\jdk1.7.0_02\bin, so you should find yours somewhere similar. From a command prompt, navigate to this directory and type:

        keytool -v -list -keystore "C:/Documents and Settings/<user name>/.android/debug.keystore"

    That's assuming the path to your debug certificate is the typical one. If this doesn't work, you can find out where it's located in Eclipse by clicking Window –> Preferences –> Android –> Build. There's no need to use the additional commands shown on Google's page. You'll be prompted for a password; just hit Enter. The last line shown, Certificate fingerprint, is the key you'll give Google to generate your new Maps API key. Technorati Tags: Android Mapping
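    For anyone doing the same lookup on Linux or Mac OS X rather than Windows, here is a minimal sketch (the ~/.android/debug.keystore location is the usual SDK default, assumed rather than taken from the post above):

        # Linux / Mac OS X equivalent of the command above
        keytool -v -list -keystore ~/.android/debug.keystore
        # press Enter at the password prompt, then look for the
        # "MD5:" line under "Certificate fingerprints" in the output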

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that, in extreme cases, prints about 20-25 pages per minute to various thermal printers, each printer in a separate thread. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it is impossible to change the codepage/emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that receipts here are signed by an Electronic Fiscal Memory device. Most of these devices don't do a good job of extracting text from XPS, since they don't follow the standard but rather rely on how Windows converts GDI to XPS. Even with text-only mode, some of them don't support all character encodings and make it impossible to send a paper-cut command after the signature. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and replace text. I know there is StringTemplate, which could handle generating the template, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution or approach for this? Or is there anything ready-made?

    Read the article

  • Ubuntu installer fails to process the preseed configuration file

    - by user76171
    I am trying to install Ubuntu 12.04 over the network, unattended. I installed a DHCP server (Dnsmasq) and a TFTP server (tftpd-hpa), I got the netboot.tar.gz archive with the pxelinux.0 file, the pxelinux.cfg directory, the Linux kernel and the initrd.gz image, and I put a preseed file on my web server. Dnsmasq, tftpd-hpa, pxelinux and Apache are all on the same machine. The PC's motherboard doesn't support PXE, so I use iPXE and boot it from a CD. The PC gets an IP from the DHCP server, then iPXE loads pxelinux.cfg/default, which I edited like this:

        timeout 5
        prompt 0
        default install
        label install
            kernel ubuntu-installer/i386/linux
            append vga=normal locale=en_GB setup/layoutcode=sl_SI console-setup/layoutcode=sl_SI netcfg/choose_interface=auto initrd=ubuntu-installer/i386/initrd.gz netcfg/get_hostname=ubuntux preseed/url=http://192.168.10.10/ins/preseed.cfg

    Then it loads the Linux kernel and the initrd.gz image. Then I get a question: Detect keyboard layout? I decided to bother with this later, so I answer No, then English twice just to get through, and then I get the error: "The installer failed to process the preconfiguration file from http://192.168.10.10/ins/preseeed.cfg. The file may be corrupt." I created the file myself and copied the d-i commands into it. I also tried to fetch preseed.cfg with a web browser and it works fine. So why is the installer failing?
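    One quick way to rule out a malformed file is to syntax-check it with debconf-set-selections and to fetch the exact URL the installer uses. This is only a sketch: the /var/www/ins path is an assumption based on the URL above, and debconf-utils must be installed on the machine doing the check.

        # check the preseed file for syntax errors (ships with debconf-utils)
        sudo apt-get install debconf-utils
        debconf-set-selections -c /var/www/ins/preseed.cfg

        # confirm the exact URL the installer requests really serves the file
        wget -O /tmp/preseed-test.cfg http://192.168.10.10/ins/preseed.cfg
        less /tmp/preseed-test.cfg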

    Read the article

  • Drivers for Ubuntu 13.10 [on hold]

    - by Fernando De Souza Martins
    I just installed Ubuntu 13.10. My screen resolution does not fit my screen, as the Ubuntu interface stretches out beyond the edges, so I thought I might install NVIDIA's driver, which I know can let me adjust the exact resolution I need. So I began a two-hour quest. I downloaded the driver hoping there would be a wizard to install it, but no. So I did a bit of research and found that feature, I think it's called Additional Drivers in English, but it won't show the NVIDIA drivers. I tried the terminal, but once I type in the commands I found, it asks for a password, and I can't type anything once the password is asked for. So, my question, obviously: how do I install this driver? I am not sure if this is appropriate, but why doesn't Ubuntu have a wizard to install things? I feel like I'm working for the OS, when it should be the other way around, but I love the concept of Linux, so I'm pushing forward and trying to use it. Another thing is, I had to install a bunch of drivers and applications for the drivers in Windows; do I need to install any other drivers? I can't change my mouse's sensitivity in the OS, it seems, so how do I do that? I'm sorry I'm asking all of this, but it seems necessary.
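    For reference, a minimal command-line sketch of the packaged-driver route on 13.10 (the ubuntu-drivers tool and the nvidia-current package name are the usual ones, but treat them as assumptions and install whatever the first command actually lists):

        # list the drivers Ubuntu knows about for this hardware
        sudo ubuntu-drivers devices
        # note: the terminal shows nothing while you type your sudo password;
        # that is normal, just type it and press Enter

        # install the packaged NVIDIA driver reported above
        sudo apt-get update
        sudo apt-get install nvidia-current

        # after a reboot, adjust the resolution with nvidia-settings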

    Read the article

  • Apt-get saying "Unable to correct problems, you have held broken packages."

    - by YatharthROCK
    TL;DR: sudo apt-get install ... is saying "Unable to correct problems, you have held broken packages."

    The problem: I was trying to get the WebApps feature for PP and QQ following this blog post. I ran the sudo add-apt-repository ppa:webapps/preview command to add the repository, but I got a connection error. Since I know my current ISP gives a shaky connection, I tried again and, sure enough, it worked. Then I ran sudo apt-get install unity-webapps-preview, but I realized we had to update apt-get first, so I hit Ctrl+C to stop it. Then I ran sudo apt-get update, which worked without a fuss, but when I ran sudo apt-get install unity-webapps-preview again later, it showed an error message. Here's the dump:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         unity-webapps-preview : Depends: xul-ext-unity but it is not going to be installed
                                 Depends: xul-ext-websites-integration but it is not going to be installed
                                 Depends: xul-ext-webaccounts but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I think this might be because I interrupted the earlier command. It hadn't had a chance to output anything, though; I stopped it pretty fast.

    What I tried: I have run a number of commands: sudo apt-get install --fix-broken, sudo apt-get autoclean, sudo apt-get autoremove, sudo apt-get -f install, and sudo apt-get install ppa-purge followed by sudo ppa-purge ppa:webapps/preview. Even after running sudo apt-get upgrade after every try, none of them worked.

    Research: I tried searching Google, looking at a couple of forums and searching on AU, but to no avail. Help would be appreciated.
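    For completeness, a sketch of the usual recovery sequence for this state, using only standard apt/dpkg commands (whether it untangles this particular xul-ext-* conflict is an assumption):

        # finish anything left half-configured by the interrupted run
        sudo dpkg --configure -a
        sudo apt-get -f install

        # refresh the lists, then ask for the dependencies explicitly;
        # apt will usually print the real underlying conflict here
        sudo apt-get update
        sudo apt-get install xul-ext-unity xul-ext-websites-integration xul-ext-webaccounts
        sudo apt-get install unity-webapps-preview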

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), on TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port, namely port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name or IP address of the SGD servers in the network; it connects only to the gateway, and the gateway takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as Firewall Forwarding, but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over Firewall Forwarding for single-port firewall traversal, due to its architectural advantages, greater flexibility and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Upgrade from Linux Mint 12 to Kubuntu 12.04?

    - by MountainX
    Is there an "easy" way to "upgrade" my existing Linux Mint 12 install to Kubuntu 12.04 beta 2? I know I could reinstall. Usually I would do a clean install to avoid unexpected issues, but in this case I don't have time to reconfigure everything from my printers to my installed software, so I am looking for the quick/easy way, while also avoiding the big risks of an upgrade gone wrong. I'm hoping to just change some repos and run a few commands from the terminal. I don't mind editing a few config files as long as I can find good HOWTOs, but I don't want to be the pioneer (arrows in back); I'm hoping someone has done this before and has a set of steps. For context, I recently installed KDE 4.8 SC onto Kubuntu 11.10 using PPAs. That was on another computer, and it wasn't a problem. But I later decided to do a fresh install of Kubuntu 12.04. I like it well enough that I want to change my other computer from Linux Mint 12 to Kubuntu. (I'm going all-in with KDE. It's now my desktop of choice.) This Linux Mint upgrade will be a move from GNOME and MGSE to KDE, so that will probably complicate things a bit compared to something like upgrading Kubuntu 11.10 to KDE 4.8. References: http://www.psychocats.net/ubuntu/kde and "Is it safe to install Kubuntu-desktop in 11.10?"

    Read the article

  • How do I restore the original color scheme, icons, and theme?

    - by katya sehgal
    I'd like the original colour scheme and icon style of 12.04 back. I somehow lost the Ambiance theme (possibly an error, or an upgrade error). I re-installed 'light-themes' from the terminal and got it back, but the panel at the top that shows the sound, battery and wi-fi options has changed and I cannot get the original setting back. In the windows, the close and minimize buttons have shifted to the right instead of the original left side. I had installed MyUnity and Ubuntu Tweak but deleted them. As such, I want the original settings back. Kindly help me with the commands. I have searched for solutions; there are multiple and I need to be sure which one to follow. Kindly bear with me before marking this as a duplicate. Discoveries: the appearance is gray and boxy as outlined here (not sure it is the same problem); there is a similar 'gray and boxy' article here; and one about the desktop forgetting its theme. I have also tried the unity --reset command. It never completes; I gave it 20 minutes.
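    A minimal sketch of resetting these pieces from the terminal on 12.04 (the theme name, icon theme name and button-layout value below are the stock Ubuntu defaults, assumed rather than taken from the question):

        # restore the default GTK/window theme and icon set
        gsettings set org.gnome.desktop.interface gtk-theme 'Ambiance'
        gsettings set org.gnome.desktop.wm.preferences theme 'Ambiance'
        gsettings set org.gnome.desktop.interface icon-theme 'ubuntu-mono-dark'

        # put the window buttons back on the left, as stock Ubuntu has them
        gsettings set org.gnome.desktop.wm.preferences button-layout 'close,minimize,maximize:'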

    Read the article

  • RPi and Java Embedded GPIO: Java code to blink more LEDs

    - by hinkmond
    Now, it's time to blink the other GPIO ports with the other LEDs connected to them. This is easy using Java Embedded, since the Java programming language is powerful and flexible. Embedded developers are not used to this, since the C programming language is more popular but less easy to develop in. We just need to use a Java String array to map to the pinouts of the GPIO port names from the previously posted diagram. This way we can address each "channel" with an index into that String array.

        static String[] GpioChannels = { "0", "1", "4", "17", "21", "22", "10", "9" };

    With this new array, we can streamline the main() of this Java program to activate all the ports.

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            FileWriter[] commandChannels;
            try {
                /*** Init GPIO port for output ***/
                // Open file handles to GPIO port unexport and export controls
                FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
                for (String gpioChannel : GpioChannels) {
                    System.out.println(gpioChannel);
                    // Reset the port
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                    // Set the port for use
                    exportFile.write(gpioChannel);
                    exportFile.flush();
                    // Open file handle to port input/output control
                    FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();
                }

    And then simply add array code where we blink the LED, to make it blink all the LEDs on and off at once.

        /*** Send commands to GPIO port ***/
        commandChannels = new FileWriter[GpioChannels.length];
        for (int channum = 0; channum < GpioChannels.length; channum++)

    It's easier than falling off a log... or at least easier than C programming. Hinkmond
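    For comparison, the same sysfs interface the Java code drives can be poked directly from a shell on the Pi, which is a handy way to sanity-check the wiring before running the program. This is just a sketch: GPIO 17 is one of the channels in the array above, and the commands assume the stock /sys/class/gpio interface and root privileges.

        # export the pin and configure it as an output (run as root)
        echo 17 > /sys/class/gpio/export
        echo out > /sys/class/gpio/gpio17/direction

        # blink: drive the pin high, wait, then low
        echo 1 > /sys/class/gpio/gpio17/value
        sleep 1
        echo 0 > /sys/class/gpio/gpio17/value

        # release the pin when done
        echo 17 > /sys/class/gpio/unexport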

    Read the article

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The processing is mainly filesystem tasks and conversions. The database of files is stored in MySQL. The script is simple but causes heavy load on the system; for example, renaming/converting/copying thousands of files in a run. The script does not read the content of files into memory, it just manages the commands for sub-processes. The main weight is on the communication with the filesystem. The script will be used regularly for new files. My concern is performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose and why. UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get the file info, then update the database. As you can see, the process has no room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc. Thus, it could be written in any language. I just wish to reduce resource usage by speeding up the long loop. NOTE: I know that questions comparing languages are not looked on favourably, but I really had a problem choosing, because the problems only appear in action.
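    To make the workload concrete, here is a rough shell sketch of the loop described in the update (the folder names, the ImageMagick convert call and the MySQL table are illustrative assumptions, not taken from the question). The point is that the script itself does almost nothing: the time goes into the sub-processes and the filesystem.

        #!/bin/sh
        SRC=/data/incoming        # assumed source folder
        DST=/data/processed       # assumed destination folder

        find "$SRC" -type f -name '*.png' | while read -r f; do
            out="$DST/$(basename "${f%.png}").jpg"
            convert "$f" "$out"                      # ImageMagick does the heavy work
            size=$(stat -c%s "$out")                 # gather file info
            mysql -e "UPDATE files SET status='done', size=$size WHERE path='$f';" mydb
        done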

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision into the source tree, the end result being that the HEAD revision is the same as the last snapshot and the other revisions are numbered accordingly. Some other requirements: the HEAD revision should not be cumulative of the previous snapshots, i.e., files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring) should not appear in the HEAD revision; meanwhile, there should be continuity between files that do persist between snapshots, so Subversion knows there are previous versions of these files and does not treat them as brand-new files in each revision. Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control is highly recommended. I am evaluating some of the currently popular version control systems (Subversion and Git), BUT I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution for Git (I will post a separate question for Git so that folks with expertise in Git and Subversion can give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script: what commands to use, and code to check valid scenarios if necessary; i.e., basically a working script. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
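    As a starting point, here is a hedged sketch of the kind of script being asked for. The repository URL, working-copy path and snapshot location are placeholders; rsync --delete plus svn add/delete is one common way to get the "not cumulative, but with continuity" behaviour described above, since unchanged files keep their history while vanished files are deleted in that revision.

        #!/bin/sh
        REPO=file:///path/to/repo          # placeholder repository URL
        WC=/tmp/import-wc                  # scratch working copy
        SNAPSHOTS=/path/to/snapshots       # the 20100523, 20100614, ... folders

        svn checkout "$REPO" "$WC"
        for snap in "$SNAPSHOTS"/*/; do
            # mirror the snapshot into the working copy, dropping files that
            # disappeared since the previous snapshot (but keep .svn metadata)
            rsync -a --delete --exclude=.svn "$snap" "$WC"/
            cd "$WC"
            svn add --force .                                        # schedule new files
            svn status | awk '/^!/ {print $2}' | xargs -r svn delete # schedule removals
            svn commit -m "Import snapshot $(basename "$snap")"
            cd - > /dev/null
        done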

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files?

    For a bit of background, these are the notes I wrote for myself when setting up the server: Now set up permissions on `/var/www` where your files are served from by default:

        $ sudo adduser $USER www-data
        $ sudo chgrp -R www-data /var/www
        $ sudo chmod -R g+rw /var/www
        $ sudo chmod -R g+s /var/www

    Now log out and log in again to make the changes take hold. The previous set of commands does the following:

    1. adds the current user ($USER) to the `www-data` group;
    2. changes `/var/www` to belong to the `www-data` group;
    3. adds read/write permissions to the group that `/var/www` belongs to;
    4. sets the SGID bit on `/var/www`; this final point bears some explaining.

    And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)
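    A few things worth checking from the SFTP side, as a sketch (the paths are just examples, and the last command assumes that files created before the g+rw/SGID change may still be missing group write):

        # confirm the SFTP login really has the group (requires a fresh login)
        id charlesr

        # look for files under /var/www that the group still cannot write
        find /var/www ! -perm -g=w -ls

        # re-apply group write to anything created before the change
        sudo find /var/www ! -perm -g=w -exec chmod g+w {} +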

    Read the article

  • apt-get failed install of libg15, all package management is failing

    - by Stifle
    I was trying to get my Logitech G510 keyboard's back-lights working, so I went into the Synaptic Package Manager and marked LibG15, G15daemon, and all the other associated packages. Synaptic reported a failed install. Now all package management is failing due to libg15 being "halfway installed." Some commands I have tried to fix the problem follow:

        root@bt:~# apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo apt-get autoremove
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo apt-get -f purge libg15
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of g15macro:
         g15macro depends on g15daemon; however:
          Package g15daemon is not configured yet.
        dpkg: error processing g15macro (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of g15stats:
         g15stats depends on g15daemon; however:
          Package g15daemon is not configured yet.
        dpkg: error processing g15stats (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         g15macro
         g15stats

    I'm not too computer savvy. Any help would be much appreciated!!! Note: I'm using Ubuntu 10.04 under BackTrack 5 R3.
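    One standard way out of the "needs to be reinstalled, but I can't find an archive" state is to force dpkg to drop the broken entry and then let apt clean up. This is a sketch using stock dpkg/apt options; the package names are taken from the errors above.

        # remove the half-installed package even though dpkg wants it reinstalled
        sudo dpkg --remove --force-remove-reinstreq libg15

        # then let apt finish configuring / cleaning up the rest
        sudo dpkg --configure -a
        sudo apt-get -f install
        sudo apt-get remove g15daemon g15macro g15stats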

    Read the article

  • Ubuntu 12.04.1 LTS - touchpad scroll for Asus K55V not working

    - by Aks
    I have tried all the commands I could find in the terminal but still could not fix it. For xinput I got:

        Virtual core pointer                        id=2    [master pointer (3)]
            Virtual core XTEST pointer              id=4    [slave pointer (2)]
            MOSART Semi. 2.4G Keyboard Mouse        id=11   [slave pointer (2)]
            PS/2 Generic Mouse                      id=15   [slave pointer (2)]
        Virtual core keyboard                       id=3    [master keyboard (2)]
            Virtual core XTEST keyboard             id=5    [slave keyboard (3)]
            Power Button                            id=6    [slave keyboard (3)]
            Video Bus                               id=7    [slave keyboard (3)]
            Video Bus                               id=8    [slave keyboard (3)]
            Sleep Button                            id=9    [slave keyboard (3)]
            MOSART Semi. 2.4G Keyboard Mouse        id=10   [slave keyboard (3)]
            USB 2.0 UVC HD Webcam                   id=12   [slave keyboard (3)]
            Asus WMI hotkeys                        id=13   [slave keyboard (3)]
            AT Translated Set 2 keyboard            id=14   [slave keyboard (3)]

    None of the posts I found could help fix it. :( Help
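    Notably, no Synaptics or Elantech touchpad device appears in that list, only a PS/2 Generic Mouse. A few stock diagnostic commands that may help narrow down whether the kernel and X see the touchpad at all (a sketch, nothing Asus-specific assumed):

        # is the touchpad recognised at the kernel level?
        grep -i -A4 "touchpad\|elantech\|synaptics" /proc/bus/input/devices

        # did X load a touchpad driver for it?
        grep -i "synaptics\|elantech" /var/log/Xorg.0.log

        # if a synaptics device is present, list its current settings
        synclient -l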

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (Team Foundation Server) as my source control software many times. Good features of TFS are:

    - Good integration with Visual Studio (so I do almost everything visually; no console commands)
    - Easy check-out, check-in process
    - Easy merging and conflict resolution
    - Easy automated builds
    - Branching

    Now, I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP, with MySQL or SQL Server databases as the storage mechanism. I just used github.com's help for this purpose: I created a profile there and downloaded a GUI for Git. Up to this part it was easy. But I'm almost stuck at going any further. I just want to do some simple (really simple) operations, including:

    - Creating a project on Git and mapping it to a folder on my laptop
    - Checking out/checking in files and folders
    - Resolving conflicts

    That's all I need to do now. But it seems that the GUI is not that user friendly. I expect the GUI to have a Connect To... button or something like that, and then I expect a list of projects to be shown, and when I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right-click a file and select check-in... or check-out and stuff like that. Do I expect too much? What should I do to easily use Git just like TFS? What am I missing here?
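    For orientation, the command-line equivalents of those three operations are short; here is a hedged sketch (the repository URL and file names are placeholders for a project created on github.com):

        # map a GitHub project to a folder on your laptop
        git clone https://github.com/<user>/<project>.git
        cd <project>

        # "check in": stage and record changes locally, then publish them
        git add .
        git commit -m "Describe the change"
        git push origin master

        # get other people's changes; any conflicts are reported here
        git pull origin master
        # edit the conflicted files, then mark them resolved and commit:
        git add <fixed-file>
        git commit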

    Read the article

  • Documenting your database with Visual Studio 2012 SSDT tools

    - by krislankford
    The title of this post is interesting, and something I suspect you and your colleagues wish you had a better way to do. I understand, as I am asked this question frequently. A couple of weeks ago I was asked the same question by a customer who documents their database using the ApexSQL Doc tool, which uses the extended properties on objects to create automated documentation. I thought that was super interesting and went down the path of seeing how we could support the creation of this documentation while leveraging the Visual Studio 2012 SSDT tools. What I found was rather intriguing. There is a property called "Description" on all objects in the SSDT tools. This property is rather subtle and, I am betting, often overlooked. To be honest, this property has probably been there for a while and I just never discovered it. Adding text to this "Description" property allows Visual Studio to emit the commands for the extended properties directly into your schema, which should be version controlled. As I did more digging, there seemed to be extended properties at every level of the SQL database objects. This fills some rather challenging gaps and allows organizations to manage SQL schema using the Visual Studio SQL database tools while also getting a way to automatically document the database. It will also work in the automation of the create and alter scripts that can be generated as part of an automated build system. Now we essentially get a way to store, build and document the database in a nice little ALM package. Happy coding!

    Read the article

  • XF86 keybinds in Openbox

    - by vasa1
    Lubuntu uses Openbox as its window manager. ~/.config/openbox/lubuntu-rc.xml is a file that specifies, among other things, keybinds for various commands. Most of the keybinds in lubuntu-rc.xml use modifier keys such as Control, Shift, Alt, and Super. For example, one way of opening a terminal window would be by pressing Control+Alt+T together:

        <!-- Launch a terminal on Ctrl + Alt + T-->
        <keybind key="C-A-T">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>

    But there is also this:

        <!-- Keybinding for terminal button-->
        <keybind key="XF86WWW">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>
        <keybind key="XF86Terminal">
          <action name="Execute">
            <command>lxsession-default terminal</command>
          </action>
        </keybind>

    What are keybind key="XF86WWW" and keybind key="XF86Terminal"? How do I locate these keys on my laptop's keyboard? My laptop is a Dell Inspiron N 1545 from 2008.
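    The XF86* names are X keysyms assigned to the extra multimedia/launcher keys found on some keyboards. One way to find out whether your keyboard produces them, and which physical key maps to which keysym, is with standard X utilities (a sketch, nothing Openbox-specific assumed):

        # press keys and watch which keysym (e.g. XF86WWW, XF86Terminal) is reported
        xev | grep --line-buffered keysym

        # or list every key code currently mapped to an XF86* keysym
        xmodmap -pke | grep XF86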

    Read the article

  • How do I get Graphics drivers / bluetooth / card reader working on an Acer Aspire V3-571G?

    - by Adam
    A couple of days ago I bought an Acer Aspire V3-571G laptop without an operating system installed on it; the only thing on it was Linpus Linux. I created a bootable CD with Ubuntu 12.04 64-bit (I read that my processor is 64-bit and that this might be a good fit for my hardware; I'm not especially fluent with all the computer stuff and am still trying to learn) and replaced Linpus with Ubuntu. Everything seemed to work fine, but a few exceptions came my way.

    My Bluetooth doesn't work. It seems to be switched on, but when I check my system settings the button is actually off, and I can't drag it permanently to the 'on' position. I tried a couple of commands I found on the net, none of them helped, and there was no word whatsoever in my BIOS settings about enabling Bluetooth.

    My card reader has serious problems copying more than one file at a time. I tried to put some music on my phone through a MicroSD card adapter (because my Bluetooth doesn't work) and it got stuck every single time I copied an album onto it.

    I'm not sure if all my drivers were properly installed, so I checked in the terminal to see if it could tell me something about my graphics. I typed sudo lshw -c display and what I got was:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: NVIDIA Corporation
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller cap_list
             configuration: latency=0
             resources: memory:b2000000-b2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:2000(size=128)
        *-display
             description: VGA compatible controller
             product: Ivy Bridge Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 09
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:44 memory:b3000000-b33fffff memory:c0000000-cfffffff ioport:3000(size=64)

    As I said, I'm no expert and not a native English speaker, but it doesn't seem right. I've got an NVIDIA GeForce GT 640M.
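    Since this is an Intel HD 4000 plus GeForce GT 640M (Optimus) combination, the NVIDIA adapter showing up as UNCLAIMED is expected until a hybrid-graphics stack is installed. On 12.04 the commonly used route was Bumblebee; a sketch follows, with the PPA and package names being the usual ones rather than anything from the question, so verify them first.

        # install the Bumblebee stack for Optimus laptops
        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia

        # after a reboot, run a program on the NVIDIA GPU explicitly to test
        optirun glxgears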

    Read the article

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    Instance subscriptions have interesting behavior, and it is not always what we are expecting. An orchestration can be enlisted with many subscriptions; in other words, it can have several Receive shapes. Usually the first Receive uses the Activation subscription, but the other Receives create Instance subscriptions. [See "Publish and Subscribe Architecture" in MSDN.] Here is a sample process. This orchestration has two Receives. It is a typical Sequential Convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas.] Let's get the experiment started.

    There are three typical scenarios.

    First scenario: everything is OK. The Activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The Instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good: the Message_2 was delivered exactly in this time interval and was consumed.

    Second scenario: no consumers. Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created, the second was delivered in the correct time interval, and the third was delivered after the SampleProcess orchestration had ended and the instance subscription was removed. Note: it was not the first Sample_2 that was consumed. It was first in the queue, but it was not waiting; it was suspended when it was delivered to the Message Box, because it didn't have any subscribers at that moment. The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we get two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description. The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description.

    Third scenario: something goes wrong. Two Sample_2 messages were delivered, both in the interval when the SampleProcess orchestration was working and the instance subscription was created and working too. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration is ended (and only after; I will discuss this in the next article), it is suspended (Nonresumable). This time only one service instance associated with this scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description. In the Message tab we will see the Sample_2 message in the Suspended (Resumable) status. Note: this behavior looks ambiguous. We see here that the orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in terms of "processed by the orchestration", but they are consumed in terms of "delivered to the subscriber". The Receive shape in the orchestration did not receive these extra messages, but the messages were routed to the orchestration.

    Unified Sequential Convoy. Now one more scenario: the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity, the SampleProcess orchestration consumes only two Sample messages. Usually the orchestration consumes a lot of messages inside a loop, but now it is only two of them. The first message starts the orchestration; the second message goes into this orchestration instance. Then the next pair of messages follows, and so on. But if the input messages follow at shorter intervals, we have a problem: we lose messages in an unpredictable manner. Note: maybe the better behavior would be for the orchestration to remove the instance subscription after the message is consumed, not at the end of the orchestration. Right now this is a "feature" of the BizTalk subscription mechanism.

    Read the article

  • My laptop with Linux/ Ubuntu isn't working

    - by Andy Campos
    I have a Dell laptop with Ubuntu Linux. One day I tried to start it up and a black screen appeared that says:

        GNU GRUB version 1.98+20100804-5ubuntu3

    with these selectable options:

        Ubuntu, with Linux 2.6.35-22-generic
        Ubuntu, with Linux 2.6.35-22-generic (recovery mode)
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)

    When I choose the first one, a bunch of text appears like:

        mount: mounting /dev/disk/by-uuid/8396a225... failed: invalid argument
        mount: mounting /dev on /root/dev failed: no such file or directory
        mount: mounting /sys on /root/sys failed: no such file or directory
        mount: mounting /proc on /root/proc failed: no such file or directory
        Target file system doesn't have requested /sbin/init
        No init found. Try passing init= bootarg

        Enter 'help' for a list of built-in commands
        BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
        (initramfs)

    When I enter 'help', a bunch more incomprehensible text appears. Whatever I type followed by the Enter key, all that pops up is (initramfs). If anyone can make rhyme or reason out of this, please help me out so it can boot up normally. If there's some kind of special code I have to type in or something, please tell me; I know nothing about computers.
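    Dropping to the BusyBox (initramfs) prompt with a by-uuid mount failure usually points at a damaged root filesystem or a changed UUID. A common first step, run from an Ubuntu live CD/USB, looks like this (a sketch; /dev/sda1 is an assumption, substitute the partition that blkid reports for that UUID):

        # from a live CD/USB session: find which partition has that UUID
        sudo blkid

        # check and repair the root filesystem (it must not be mounted)
        sudo fsck -f -y /dev/sda1

        # then reboot and try the normal GRUB entry again
        sudo reboot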

    Read the article

  • How do they keep track of the NPCs in Left 4 Dead?

    - by f20k
    How do they keep track of the NPC zombies in Left 4 Dead? I am talking about the NPCs that just walk into walls or wander around aimlessly. Even though the players cannot see them, they are there (say, inside rooms or behind doors). Let's say there are about 10 or so zombies in a hallway and inside rooms. Does the game keep all of those zombies in a list and iterate through it, giving them commands? Do they just spawn when the user is within a certain radius or has reached a special location? Say you placed the 4 units (controlled by players) in completely different places throughout the map. Let's assume you aren't being swarmed and that you have not killed any of these aimless NPCs. Would the game be keeping track of 10 x 4 = 40 zombies in total? Or is my understanding completely off? The reason I ask is that if I were to implement something similar on a mobile device, keeping track of 40 or more NPCs might not be such a great idea.

    Read the article

  • Failed to fetch *.deb Size mismatch, then packages with unmet dependencies [solved]

    - by user113907
    I recently bought the wonderful-looking and well-reviewed Amnesia: The Dark Descent and I'm trying to install it. The first time I tried to download it, I had to stop in the middle of the download (which may have broken something). The second time, at the end of the download it gave me the following error:

        Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/amnesia/ubuntu/pool/main/a/amnesia/amnesia_1.2.1-0ubuntu2_i386.deb Size mismatch

    Now, whenever I try to download it, it gives me this error:

        The following packages have unmet dependencies:
         amnesia: Depends: libalut0 (>= 1.0.1) but it is not going to be installed
                  Depends: libc6 (>= 2.4) but 2.15-0ubuntu10.3 is to be installed
                  Depends: libfontconfig1 (>= 2.8.0) but 2.8.0-3ubuntu9.1 is to be installed
                  Depends: libfreetype6 (>= 2.2.1) but 2.4.8-1ubuntu2 is to be installed
                  Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.3-1ubuntu5 is to be installed
                  Depends: libopenal1 (>= 1:1.13) but 1:1.13-4ubuntu3 is to be installed
                  Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.4ubuntu3 is to be installed
                  Depends: libstdc++6 (>= 4.1.1) but 4.6.3-1ubuntu5 is to be installed
                  Depends: libxft2 (> 2.1.1) but 2.2.0-3ubuntu2 is to be installed
                  Depends: zlib1g (>= 1:1.1.4) but 1:1.2.3.4.dfsg-3ubuntu4 is to be installed

    I already searched the net and ran a few command-line commands, e.g.:

        sudo dpkg --configure -a
        sudo apt-get install -f

    I also configured the software packages to download from Main instead of the local UK server, but I'm really not figuring out a solution. I have a fresh install of the latest LTS (12.04). The only non-standard thing so far is that I installed gnome-shell, because I really can't stand Unity. Help would be much appreciated. I am currently more than entertained enough with World of Goo and Command & Conquer, but I will want to play Amnesia in the near future.
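    Since the first failure was a size mismatch, a partially downloaded .deb may still be sitting in apt's cache and spoiling later attempts. A sketch of clearing it out and retrying (standard apt commands; whether this also clears the libalut0 dependency error is an assumption):

        # throw away partially downloaded packages and refresh the lists
        sudo apt-get clean
        sudo apt-get update

        # then retry the install
        sudo apt-get install -f
        sudo apt-get install amnesia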

    Read the article

  • Running multiple box2D world objects on a server

    - by CharbelAbdo
    I'm creating a multiplayer game using LibGDX (with Box2D) and Kryonet. Since this is the first time I have worked on a multiplayer game, I read a bit about server-client implementations, and it turns out that the server should handle important tasks like collision detection, hits, characters dying, etc. Based on some articles (like the excellent Gabriel Gambetta "Fast-Paced Multiplayer" series), I also know that the client should work in parallel to avoid lag while the server responds to commands. Physics-wise, each game will have 2 players and any projectiles fired. What I'm thinking of doing is the following:

    - Create a physics world on the client.
    - When the game is signaled to start, create the same physics world on the server (without any rendering, obviously).
    - Whenever the player issues a command (move or fire), send the command to the server and immediately start processing it on the client.
    - When the server receives the command, it applies it to the server's world (sets velocity, etc.).
    - Every 100 ms, the server sends the new state to the client, which corrects what was calculated locally.
    - Any critical action (hit, death, level up) is calculated only on the server and sent to the client.

    Essentially, I would have a Box2D World object running on the server for each game in progress, in sync with the worlds running on the clients. The alternative would be to do my own calculations on the server instead of relying on Box2D to do them for me, but I'm trying to avoid that. My question is: is it wise to have, for example, 1000 instances of the World object running and executing steps on the server? Tomcat used around 750 MB of memory when trying it without any objects added to the world. Has anybody tried that before? If not, is there any alternative? Google did not help me; are there any guidelines to follow when you want to have physics on both the client and the server? Thanks for any help.

    Read the article

  • Proof of identity for a stolen computer: getting computer identification info from Launchpad bugs and comparing

    - by Kangarooo
    I sold my old laptop to neighbours and it was stolen from them. Well, I think I have found the thief, so I want to check that computer's hardware IDs and compare them with my old Launchpad bugs. How can I find, from my bugs in Launchpad: the motherboard, the HDD, or something else that can help identify it? Maybe also how to recover or find some overwritten files (because now there is Windows on it). I found that one of my Launchpad bugs has an auto-generated lspci attachment, from bug 682846: https://launchpadlibrarian.net/70611231/Lspci.txt, but I don't see any ID there that can be used to identify my computer specifically; it could identify many machines of the same model. Or did I miss something in there? And what commands should I use to get all the identification info from that computer in one quick go? Just lspci? How do I get the same lspci output as in that Launchpad link? Testing lspci on my computer now, I don't get that much info. Also, I'm now searching my external HDD, where I have many backups, and maybe I have an lspci result there. So what keywords would help when searching for the small lspci output and the full reports I've done? I might have done sudo lshw somefilename.
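    For the "all identification in one go" part, the standard tools below dump serial numbers and hardware IDs that are tied to one physical machine, unlike a plain lspci listing. This is only a sketch: run it with sudo on the machine in question, and the output file name is just an example.

        # verbose PCI listing with vendor/device IDs, similar to the Launchpad attachment
        sudo lspci -vvnn > hardware-report.txt

        # motherboard, chassis and system serial numbers
        sudo dmidecode -t system -t baseboard >> hardware-report.txt

        # hard disk model and serial number
        sudo hdparm -I /dev/sda | grep -i -A2 "model\|serial" >> hardware-report.txt

        # full hardware inventory
        sudo lshw >> hardware-report.txt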

    Read the article

  • Compiling C++ code with mingw under 12.04

    - by golemit
    I tried to set up compilation of C++ projects under Ubuntu 12.04 with MinGW and the Qt libraries. The idea was to get executables that are independent of the target Windows version and of my colleagues' development environments. It was successfully implemented under openSUSE 12.2 with mingw32 and some additional libraries, including mingw32-libqt4 and some others. Fine. However, when trying to do the same under Ubuntu 12.04 with mingw-w64 and the latest Qt 4.8.3 libraries copied from Windows, there were always errors. No luck. The typical errors from these attempts can be seen below. The commands used:

        qmake -spec /path_to_my_conf/win32-x-g++ my_project.pro
        make

    Can someone give a hint about the source of the problem? I would appreciate good advice. Serge

    Some extracts from the log:

        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xec): undefined reference to `QDialog::accept()'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xf0): undefined reference to `QDialog::reject()'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x104): undefined reference to `non-virtual thunk to QWidget::devType() const'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x108): undefined reference to `non-virtual thunk to QWidget::paintEngine() const'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x10c): undefined reference to `non-virtual thunk to QWidget::getDC() const'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x110): undefined reference to `non-virtual thunk to QWidget::releaseDC(HDC__*) const'
        ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x114): undefined reference to `non-virtual thunk to QWidget::metric(QPaintDevice::PaintDeviceMetric) const'
        ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x24): undefined reference to `__imp___Z21qRegisterResourceDataiPKhS0_S0_'
        ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x64): undefined reference to `__imp___Z23qUnregisterResourceDataiPKhS0_S0_'
        collect2: ld returned 1 exit status

    Read the article
