Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 471/672

  • Resolution changes when using a switch

    - by Edward D
    So, the "real" resolution of my monitor is 1024x768. That's what I'd use on my docked Windows laptop, and what I'd use on my Xubuntu desktop connected directly. When I connect a switch, to switch between the two, however, the ubuntu machine's resolution changes. Everything's still proportional, and it still thinks it's doing 1024x768, but the icons and fonts appear larger. Not 800x600 larger, but still big. When I used Xubuntu Precise, I created an xorg.conf file to set a resolution of 1280x1024 which made it look the way it does without the switch ... as a workaround. When I upgraded to Trusty, I lost this. I tried to re-create it, but doesn't seem to load my file. Ideally, I'd like to correct the original problem, but I'd settle for being able to up the resolution. I searched for a while, and tried to do it, but I'm giving up ... please help me out. Controller: Intel Corporation 82915G/P/GV/GL/PL/910GL Memory Controller Hub (rev 04) Monitor: NEC MultiSync LCD 1850e http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf OS: Xubuntu 14.04 Trusty /etc/X11/xorg.conf: # YOU CREATED THIS FILE # sudo leafpad /etc/X11/xorg.conf Section "Device" Identifier "Configured Video Device" EndSection Section "Monitor" Identifier "NEC LCD1850E" # I found Synchronization Range at: # http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf HorizSync 31.0-82.0 VertRefresh 55.0-85.0 EndSection Section "Screen" Identifier "Default Screen" Monitor "NEC LCD1850E" Device "Configured Video Device" SubSection "Display" Depth 24 Modes "1280x1024" "1280x960" "1024x768" "800x600" "848x480" "640x480" EndSubSection EndSection

  • How to know why my CD is not booting?

    - by Tom Brito
    I have downloaded Ubuntu 10.10 and burned the ISO, but it will not boot. I ruled out problems with the ISO, as I downloaded it from the official website with no errors and burned it with no errors. I ruled out problems with the burning, as it looks like it was recorded without errors, both here and later on another computer. I ruled out problems with my DVD reader, as other CDs boot fine. I'm currently using Ubuntu 9.10. I know I can upgrade over the internet, but I have this same problem with my Windows XP CD, so I really would like to discover what's going on here. My Ubuntu 9.10 CD boots just fine, but the new one does not. What else could it be? Or what more precise tests can I run to discover where the problem is?

    More info: what happens when I try to boot with the Ubuntu 10.10 CD is that it behaves as if there's no bootable CD in the drive. It simply doesn't find the boot record on the CD and starts the system from the hard disk. My notebook is an Amazon PC with an Intel Celeron 1.5, 2 GB of memory, a DVD-RW drive, and a 260 GB Samsung hard disk.

  • My web cam is not working properly in Skype on 12.04

    - by elvin
    oller #1 (rev 02)
    00:1d.1 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
    00:1d.2 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
    00:1d.3 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #4 (rev 02)
    00:1d.7 USB controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
    00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
    00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
    00:1f.2 IDE interface: Intel Corporation 82801EB (ICH5) SATA Controller (rev 02)
    00:1f.3 SMBus: Intel Corporation 82801EB/ER (ICH5/ICH5R) SMBus Controller (rev 02)
    00:1f.5 Multimedia audio controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) AC'97 Audio Controller (rev 02)
    01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    01:08.0 Ethernet controller: Intel Corporation 82562EZ 10/100 Ethernet Controller (rev 01)

    This is the output (truncated at the beginning) I got when I typed lspci in the terminal; how do I find out what type of cam it is? The command letters I wrote here may be wrong; never mind, I wrote them from memory. My web cam shows a green colour shade and no proper colour. Please help me fix this problem. Second option: if I buy a Microsoft web cam, will the image be corrected? I think my web cam is made by Hyundai Electronics.

  • Can't run Minecraft on Ubuntu 12.04 LTS [duplicate]

    - by user170011
    This question already has an answer here: How to correctly install and troubleshoot Minecraft (Client) (3 answers)

    I was trying to run Minecraft on my laptop with Ubuntu 12.04 LTS 64-bit. I have a Lenovo IdeaPad P580 with 7.7 GB of memory and an Intel® Core™ i7-3520M CPU @ 2.90GHz × 4 processor. Under the graphics section of the system overview in Ubuntu, it says I have none installed. My computer comes with an NVIDIA GeForce graphics card, but it isn't recognized. When I start Minecraft I get this crash report:

        ---- Minecraft Crash Report ----
        // Shall we play a game?

        Time: 24/06/13 7:23 PM
        Description: Failed to start game

        org.lwjgl.LWJGLException: Could not init GLX
            at org.lwjgl.opengl.LinuxDisplayPeerInfo.initDefaultPeerInfo(Native Method)
            at org.lwjgl.opengl.LinuxDisplayPeerInfo.<init>(LinuxDisplayPeerInfo.java:52)
            at org.lwjgl.opengl.LinuxDisplay.createPeerInfo(LinuxDisplay.java:684)
            at org.lwjgl.opengl.Display.create(Display.java:854)
            at org.lwjgl.opengl.Display.create(Display.java:784)
            at org.lwjgl.opengl.Display.create(Display.java:765)
            at net.minecraft.client.Minecraft.a(SourceFile:235)
            at avv.a(SourceFile:56)
            at net.minecraft.client.Minecraft.run(SourceFile:507)
            at java.lang.Thread.run(Thread.java:679)

        A detailed walkthrough of the error, its code path and all known details is as follows:

        -- System Details --
        Details:
            Minecraft Version: 1.5.2
            Operating System: Linux (amd64) version 3.5.0-34-generic
            Java Version: 1.6.0_27, Sun Microsystems Inc.
            Java VM Version: OpenJDK 64-Bit Server VM (mixed mode), Sun Microsystems Inc.
            Memory: 406175448 bytes (387 MB) / 514523136 bytes (490 MB) up to 1908932608 bytes (1820 MB)
            JVM Flags: 2 total; -Xmx2048M -Xms512M
            AABB Pool Size: 0 (0 bytes; 0 MB) allocated, 0 (0 bytes; 0 MB) used
            Suspicious classes: No suspicious classes found.
            IntCache: cache: 0, tcache: 0, allocated: 0, tallocated: 0
            LWJGL: 2.4.2
            OpenGL: ~~ERROR~~ NullPointerException: null
            Is Modded: Probably not. Jar signature remains and client brand is untouched.
            Type: Client (map_client.txt)
            Texture Pack: Default
            Profiler Position: N/A (disabled)
            Vec3 Pool Size: ~~ERROR~~ NullPointerException: null

    I can run it on different versions of Linux, such as Fedora.

  • Ubuntu Slow - What architecture does the Windows Installer install?

    - by Benjamin Yep
    I feel absolutely limited by using Windows, and I need to switch to a Unix environment. I once installed Red Hat on my lappie (screen + external monitor setup; 4GB RAM; x64; runs fast) and it worked fine, but I saw that the computer cluster that is the birthplace of my Unix knowledge switched to Ubuntu, so naturally I follow. To the point: when I installed Ubuntu onto my machine via the Windows Installer, it ran quite slowly. Opening Firefox takes about 8-9 seconds, and it freezes up often, unable to handle its own background processes. I saw in a thread that, perhaps, it is running slowly because the Windows Installer is installing an x64 version. Of course, my computer has had no performance issues in the past (except that time with the trojans, but you know, no one is perfect ;) ). Anyway, I uninstalled Ubuntu, freeing up the maximum allocated space it took up, and continue to be sad, trapped in my MS world with only a buggy Cygwin. Any assistance is greatly appreciated! :) Thanks ~Ben

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services

    As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment, coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. The newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed in architecture/planning, implementation, and expert services, which, in turn, provide faster adoption and return on investment with Oracle solutions.

    On June 10th at 10:00AM PST, Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc analytic queries on your organization’s business data as it exists at that moment, and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
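
    To make the structural sharing described above concrete, here is a minimal C++ sketch (my own illustration, assuming a simple singly-linked persistent list, not code from any particular library): "adding" returns a new list whose nodes partly alias the old one, so a thread holding the original version never observes a change.

        #include <iostream>
        #include <memory>

        template <typename T>
        class PersistentList {
            struct Node {
                T value;
                std::shared_ptr<const Node> next;
            };
            std::shared_ptr<const Node> head_;
            explicit PersistentList(std::shared_ptr<const Node> h) : head_(std::move(h)) {}
        public:
            PersistentList() = default;

            // Returns a NEW list; *this is never mutated, so concurrent
            // readers of the original version are safe without any locking.
            PersistentList push_front(T value) const {
                return PersistentList(
                    std::make_shared<const Node>(Node{std::move(value), head_}));
            }

            void print() const {
                for (auto n = head_.get(); n; n = n->next.get())
                    std::cout << n->value << ' ';
                std::cout << '\n';
            }
        };

        int main() {
            PersistentList<int> v1 = PersistentList<int>().push_front(1);
            PersistentList<int> v2 = v1.push_front(2); // shares v1's node internally
            v1.print(); // 1   -- unchanged: what another thread would still see
            v2.print(); // 2 1
        }

    Note that the sketch shows exactly what the question observes: readers are safe for free, but two threads that both want "the latest" version still need an agreed-upon mutable reference (an atom, a lock, STM) through which to swap in the new head.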

  • Ubuntu Server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched other questions and googled a lot, but I can't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450. It recognizes only 1 of the 4 cores I have. I tried to modify the GRUB config, but that didn't work. In the machine's BIOS I didn't find anything useful.

    CPU:

        root@darwin:~# cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
        stepping        : 10
        cpu MHz         : 2992.180
        cache size      : 6144 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 5984.36
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 38 bits physical, 48 bits virtual
        power management:

    GRUB:

        root@darwin:~# cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX="noapic nolapic" #was with acpi=off

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    Complete dmesg: too long, posted on Pastebin: http://pastebin.com/bsKPBhzu

  • How to avoid game objects accidentally deleting themselves in C++

    - by Tom Dalling
    Let's say my game has a monster that can kamikaze explode on the player. Let's pick a name for this monster at random: a Creeper. So, the Creeper class has a method that looks something like this:

        void Creeper::kamikaze() {
            EventSystem::postEvent(ENTITY_DEATH, this);

            Explosion* e = new Explosion;
            e->setLocation(this->location());
            this->world->addEntity(e);
        }

    The events are not queued; they get dispatched immediately. This causes the Creeper object to get deleted somewhere inside the call to postEvent. Something like this:

        void World::handleEvent(int type, void* context) {
            if (type == ENTITY_DEATH) {
                Entity* ent = static_cast<Entity*>(context); // (a void* cannot be dynamic_cast)
                removeEntity(ent);
                delete ent;
            }
        }

    Because the Creeper object gets deleted while the kamikaze method is still running, it will crash when it tries to access this->location(). One solution is to queue the events into a buffer and dispatch them later. Is that the common solution in C++ games? It feels like a bit of a hack, but that might just be because of my experience with other languages with different memory management practices. In C++, is there a better general solution to this problem where an object accidentally deletes itself from inside one of its methods?
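
    For reference, here is a minimal sketch (my own, not from the post) of the other common C++ answer: deferred destruction. The event handler only marks the entity as dead, and the world sweeps marked entities once per frame, at a point where no entity method can still be on the call stack.

        #include <algorithm>
        #include <memory>
        #include <vector>

        // Hypothetical minimal Entity base class, for illustration only.
        struct Entity {
            virtual ~Entity() = default;
            bool pendingKill = false;
        };

        class World {
            std::vector<std::unique_ptr<Entity>> entities_;
        public:
            // Safe to call from inside an entity's own methods (e.g. kamikaze()):
            // it only sets a flag, so the object stays alive until the sweep.
            void kill(Entity* e) { e->pendingKill = true; }

            // Called once per frame from the main loop, after the update pass,
            // when no entity member function is executing.
            void purgeDead() {
                entities_.erase(
                    std::remove_if(entities_.begin(), entities_.end(),
                                   [](const std::unique_ptr<Entity>& e) {
                                       return e->pendingKill;
                                   }),
                    entities_.end());
            }
        };

    Queuing the death events achieves the same thing one level earlier: postEvent appends to a buffer that the main loop drains after the update pass, so no handler can destroy an object whose method is still running. Neither pattern is really a hack; both are widespread in C++ engines.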

  • Ubuntu 12.04 Frequent Kernel Panics

    - by Jordan Johns
    Dealing with what seems to be quantum behavior here. I'm having frequent kernel panics, and the problem is very hard to locate. To cut to the chase: I get a kernel panic screen. This always occurs after login (the login screen is fine; I can go into my TTYs and everything). But once I log in, I have anywhere from 0 (instant) to 10 seconds before everything grinds to a halt and freezes. The only way I found out that it was a kernel panic was going into a TTY while it happens. At first, I figured something must be wrong with memory. I ran memtest86 with the full standard test: no errors. OK. CPU problem? Windows works fine; I ran Prime95 and a bunch of other torture tests for many hours. Drive problem? Nope. fsck reports no problems, and the SMART status is OK. Overclocking problem? Nope. Windows is working fine, and I also reset the BIOS to prove the point that it was not the cause. I am at a loss here; even trying to reinstall gives me the same problem, right when attempting to install Ubuntu (like right after you select "Install Ubuntu" or "Try Ubuntu before installing"). Anyone have any thoughts? It's very bizarre; it's as if my system hates Linux and is trying to tell me not to use it (that would be a terrible reality). Thanks!

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft, I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume Minecraft is fairly slow because:

    A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version.

    B) It doesn't partition its world very well.

    I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now, I am assuming that to make this sort of world perform well you would need to:

    A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level.

    B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user could occlude the blocks behind them, making it pointless to render them.

    C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject (a sketch of one starting point follows below). How would you improve performance in a large voxel-based world?
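
    As a concrete illustration of point C) (plus the neighbour test from point B)), here is a minimal C++ sketch of my own, not taken from any engine: block types stored as plain bytes in a per-chunk flat array, so blocks are as lightweight as possible and an (x, y, z) lookup is one multiply-add away.

        #include <array>
        #include <cstdint>

        // Block types as plain bytes keep the world compact:
        // 16*16*16 blocks = 4 KiB per chunk.
        enum BlockType : std::uint8_t { Empty = 0, Dirt = 1, Stone = 2 };

        constexpr int kChunkSize = 16;

        class Chunk {
            std::array<std::uint8_t, kChunkSize * kChunkSize * kChunkSize> blocks_{};

            // Row-major flattening of the 3D coordinate into the 1D array.
            static int index(int x, int y, int z) {
                return (x * kChunkSize + y) * kChunkSize + z;
            }
        public:
            BlockType get(int x, int y, int z) const {
                return static_cast<BlockType>(blocks_[index(x, y, z)]);
            }
            void set(int x, int y, int z, BlockType t) {
                blocks_[index(x, y, z)] = static_cast<std::uint8_t>(t);
            }
            // A face is only worth rendering when the neighbouring block is
            // empty -- the visibility-culling idea from point B).
            bool faceVisible(int x, int y, int z, int dx, int dy, int dz) const {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 ||
                    nx >= kChunkSize || ny >= kChunkSize || nz >= kChunkSize)
                    return true; // neighbour lives in another chunk; assume visible here
                return get(nx, ny, nz) == Empty;
            }
        };

    Chunks, rather than individual blocks, would then be the leaves of whatever octree or kd-tree does the coarse spatial partitioning.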

  • FILESTREAM in SQL Server 2008 R2

    - by CatherineRussell
    Much data is unstructured, such as text documents, images, and videos. This unstructured data is often stored outside the database, separate from its structured data. This separation can cause data management complexities. Or, if the data is associated with structured storage, the file streaming capabilities and performance can be limited. FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data. FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing. FILESTREAM data is not encrypted even when transparent data encryption is enabled. To read more, go to: http://technet.microsoft.com/en-us/library/bb933993.aspx

  • BusEnum2 and a Minor Bug Fix

    - by Kate Moss' Open Space
    The default root bus driver, BusEnum, enumerates and activates drivers one by one in a synchronized manner. Not only does this slow down boot time, but if any driver's init function (XXX_init) hangs, the whole system won't boot at all. There is a sample enhanced root bus driver, BusEnum2, at http://msdn.microsoft.com/en-us/library/dd187254.aspx. The page provides the sample code and a detailed explanation of the design concept. With the multi-threaded BusEnum2 on a CE7 SMP-enabled system, the scalability is even more significant, since you have more than one processor and it can load drivers in parallel! Everything looks good so far, except that there is a small bug in the sample code. Fortunately, it is easy to fix, but hard to trace if you ever encounter it! The BUSENUM2 flag is only defined in BUSENUM2\BUSDEF\sources, but not in BUSENUM2\BUSENUM\sources. The DeviceFolder is implemented in BUSENUM2\BUSDEF, but the instance is created in BUSENUM2\BUSENUM\busenum.cpp, so the result is that it allocates less memory than is actually needed.

    Add

        CDEFINES=$(CDEFINES) -DBUSENUM2

    to BUSENUM2\BUSENUM\sources and the problem is fixed!

  • Effect of using dedicated NVidia card instead of Intel HD4000

    - by Sman789
    Short version: can someone please advise me on the effect of adding a dedicated NVIDIA GeForce GT 630M card to an Ubuntu laptop, in terms of power consumption and performance gains/losses when doing general productivity tasks and booting up. Also, how good are the closed-source, open-source, and Bumblebee drivers for these newer cards compared to support for the Intel HD4000?

    Long version/background, if any info here is helpful: I'm thinking of ordering a laptop from PC Specialist (a UK company who actually sell machines without Windows pre-installed) with the following specifications:

        Genesis IV: 15.6" AUO Matte 95% Gamut LED Widescreen (1920x1080)
        Intel® Core™ i5 Dual Core Mobile Processor i5-3210M (2.50GHz) 3MB
        4GB SAMSUNG 1600MHz SODIMM DDR3 MEMORY (1 x 4GB)
        120GB INTEL® 520 SERIES SSD, SATA 6 Gb/s (up to 550MB/s R | 520MB/s W)
        Intel 2 Channel High Definition Audio + MIC/Headphone Jack
        GIGABIT LAN & WIRELESS INTEL® N135 802.11N (150Mbps) + BLUETOOTH

    Now, as I want this laptop mainly for work and not for games, I would be more than content with the HD4000 integrated chip which comes with the processor. However, for compatibility reasons, I am not able to get the specs I want unless I choose an NVIDIA GeForce GT 630M 1GB graphics card, which I don't have a great deal of use for. I'm willing to buy it, however, as it's still cheaper than any other laptop with the specs I want. However, I know that Linux power management isn't fantastic with open-source graphics drivers, and I don't know much about Bumblebee. Basically, whilst I'm happy to 'tolerate' the card being there, I don't want to experience any negative effects on the rest of my system (battery, performance etc.), and if there are likely to be any, I might reconsider my purchase. So if anyone can advise me on the effects, I would be very grateful, since I doubt I can just turn the card off. Thank you for any assistance :)

  • Ready, set, go for the Exadata Community in its new look!

    - by Frank Schneede (Exadata Community)
    At last, the time has come! Right on time for the start of the DOAG Konferenz 2012, taking place from 20.11. to 22.11.2012 in Nuremberg, the German Exadata Community launches with a completely redesigned look. Here you will be kept regularly informed about new announcements as well as tips and tricks for working with Exadata. Thanks to the freer blog format, reports on particularly noteworthy Exadata projects will also appear here. I think you can look forward to it! A lot has happened in the community since the last update, because at this year's Oracle Open World in San Francisco a whole series of exciting announcements around Exadata was made. The recently introduced Exadata Database Machine X3-2 and X3-8 models are unchanged in their basic architecture, but with current processors based on the Sandy Bridge microarchitecture they are even more powerful than before. The flash cache, now four times larger, holds considerably more data and thus turns Exadata into an "in-memory" database machine. With the new Exadata software 11.2.3.2, the flash cache can now be used as a persistent write-back flash cache. Thanks to this new kind of caching, OLTP applications that generate a heavy load of write transactions also benefit more strongly from the Exadata technology. A new entry-level model, the Exadata X3-2 Eighth Rack, completes the product family and once again lowers the entry barrier for customers. The two community tips on the Exadata hardware have been updated. Read all about the Exadata Database Machine X3-2 and its big sister, the Exadata Database Machine X3-8.

  • Why has my internet speed dropped?

    - by Door Knob
    I recently switched to Ubuntu. I've been having a lot of internet trouble ever since. I used Windows 7 before. I've had trouble loading web pages; it can take a solid minute or two to even start displaying anything. Why is this? How can I fix it?

    Details: Ubuntu 14.04

    ifconfig:

        eth0      Link encap:Ethernet  HWaddr f0:4d:a2:2c:59:42
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:21 Memory:f7ae0000-f7b00000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:785 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:785 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:70511 (70.5 KB)  TX bytes:70511 (70.5 KB)

        wlan0     Link encap:Ethernet  HWaddr 14:da:e9:b0:9d:66
                  inet addr:192.168.2.12  Bcast:192.168.2.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11979 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10503 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:13659505 (13.6 MB)  TX bytes:1449698 (1.4 MB)

    Here's a comparison, taken about 30 seconds apart: a Speedtest on my phone versus a Speedtest on my PC (screenshots).

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years, I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup?

    http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/

        Alternate install CD

        The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations: setting up automated deployments; upgrading from older installations without network access; LVM and/or RAID partitioning; installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably).

    LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

  • TechEd North America 2012–Day 3 #msTechEd #teched

    - by Marco Russo (SQLBI)
    Yesterday I spent the longest day at this TechEd: we talked with many people at Community Night until 9pm, and I have to say that, just a few months after Analysis Services 2012 was released, there are many people already using it. And the adoption of PowerPivot is starting to be quite large. Many new ideas and challenges are coming from several different real-world scenarios. I was tired but really happy. Alberto presented his Many-to-Many Relationships in BISM Tabular session, which was in the same time slot as the BI Power Hour. For this reason, very few people attended Alberto's session, so I think many will watch the recorded session (it should be available within a few days). So what about today? I'll spend some time at the Technical Learning Center area (full schedule here), but the most important event today will be Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Why should you attend? Mainly because you will see a live demo over a 4-billion-row table with many-to-many relationships involved in complex queries. But for those of you who think this is not enough to attend a fun 15-minute session, well, we'll give away some 8GB USB memory keys to those of you who guess the exact response time of queries before execution. Convinced? Join us at 11:15am, and don't be late; the session will finish at 11:30am! After that, we'll run a book-signing session at the bookstore at 12:30pm, and I will be in the Technical Learning Center area from 3:00pm until 5:00pm. See you there!

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: we are working on a multi-threaded (Linux, C) application that follows a pipeline model. Each module has a private thread and encapsulated objects which do processing of data, and each stage has a standard form of exchanging data with the next unit. The application is free from memory leaks and is thread-safe, using locks at the points where threads exchange data. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 odd objects, all of which have some critical logging to do.

    Most discussions I have seen are about log levels, as in Log4J and its ports to other languages. The really big question is how the overall logging should happen. One approach is that all local logging does fprintf to stderr, with stderr redirected to some file. This approach is very bad when logs become too big. If all objects instantiate their individual loggers, there will be too many files (about 30-40 of them), and, unlike the approach above, one loses the true order of events. Timestamping is one possibility, but it is still a mess to collate. If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy writing its log; this is unacceptable when the processing in the threads is heavy.

    So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large-scale applications? I would also love to learn from some of the real designs of large-scale applications, to get inspiration!
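
    One widely used middle ground, sketched below in C++ (my own illustration, not from the post; the same structure works in C with pthreads): keep a single shared logger, but make the caller's critical section just an enqueue, and let one dedicated writer thread do the slow formatting and file I/O. Messages then keep a true global order without per-object files, and workers are barely blocked.

        #include <condition_variable>
        #include <fstream>
        #include <mutex>
        #include <queue>
        #include <string>
        #include <thread>

        class AsyncLogger {
            std::queue<std::string> queue_;
            std::mutex m_;
            std::condition_variable cv_;
            bool done_ = false;
            std::thread writer_; // declared last so it starts after the members above

            void run(const std::string& path) {
                std::ofstream out(path, std::ios::app);
                std::unique_lock<std::mutex> lk(m_);
                while (!done_ || !queue_.empty()) {
                    cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                    while (!queue_.empty()) {
                        std::string msg = std::move(queue_.front());
                        queue_.pop();
                        lk.unlock();      // do the slow file write without the lock
                        out << msg << '\n';
                        lk.lock();
                    }
                }
            }
        public:
            explicit AsyncLogger(const std::string& path)
                : writer_([this, path] { run(path); }) {}

            ~AsyncLogger() {
                { std::lock_guard<std::mutex> lk(m_); done_ = true; }
                cv_.notify_one();
                writer_.join(); // drain remaining messages before exit
            }

            // Cheap for callers: one short critical section per message.
            void log(std::string msg) {
                { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(msg)); }
                cv_.notify_one();
            }
        };

    The short per-message lock keeps 15 worker threads from serializing on I/O; if even that contention matters, the queue can be swapped for a lock-free MPSC ring buffer without changing the overall design.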

  • Lifebook LH531 inbuilt card reader not working

    - by chandrasekar
    The inbuilt card reader (SD/PRO/SDHC) is not working. When I insert a memory card, the indicator comes on for a millisecond and then nothing happens. When I run lspci, it gives the output pasted below. I use Ubuntu 11.10. Please help.

        pro-hq@prohq-LIFEBOOK-LH531:~$ lspci
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5)
        00:1d.0 USB Controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        01:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 (rev 34)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
        pro-hq@prohq-LIFEBOOK-LH531:~$

  • Ubuntu 64bit Black Screen on Minecraft

    - by Signify
    I have tried posting on the forums, but I really need help (I'm a server admin and really don't want to have to switch to Windows just to run Minecraft). Anyhow, I was originally running OpenJDK 6, as I was told that 7 was unstable, and was getting periodic lag spikes while walking (at least once every 3 seconds the screen would freeze for a tenth of a second). After that, I attempted to install Sun's Java JDK 7 (I couldn't get hold of 6 without signing up for Oracle's newsletters). Upon attempting to run Minecraft, I got a black screen after logging in, with this error message:

        27 achievements
        182 recipes
        Setting user: Thunder7102, -1618112820878091307
        Exception in thread "Minecraft main thread" java.lang.UnsatisfiedLinkError: /home/noiro/.minecraft/bin/natives/liblwjgl.so: /home/noiro/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1939)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1864)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at org.lwjgl.Sys$1.run(Sys.java:69)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
            at org.lwjgl.Sys.loadLibrary(Sys.java:81)
            at org.lwjgl.Sys.<clinit>(Sys.java:98)
            at org.lwjgl.opengl.Display.<clinit>(Display.java:132)
            at net.minecraft.client.Minecraft.a(SourceFile:184)
            at net.minecraft.client.Minecraft.run(SourceFile:657)
            at java.lang.Thread.run(Thread.java:722)

    Now, this got me fed up, so I tried to install a Windows 7 virtual machine through VirtualBox. I gave it 256MB of graphics memory with 2D and 3D acceleration and 3GB of RAM. I installed Java JDK 7 for Windows (which does work, from experience on my other Windows 7 partition). Once again, a black screen after login. What the heck is going on, guys?

    My system specs: Ubuntu 12.04 64-bit, fully updated, running GNOME 3; NVIDIA GTS 450 1.3GB, OC'd; AMD Athlon II x4 2.8GHz; 6GB of RAM.

    So, what do you think?

  • GPU-based procedural terrain borders?

    - by OnePie
    I'm working on a game that should preferably feature a combination of designed and procedurally generated terrain, where the designer specifies in somewhat detailed terms what type of terrain a given area will have (grasslands, forest etc...) and then a procedural algorithm takes care of the rest. I'm not talking about Minecraft-style biomes, but rather the game map for a strategy game. Each 'area' will not take up that much of the screen, and is thus more akin to a tile whose texture is procedurally generated. While procedurally generating terrain textures on the GPU is not that difficult, the hard part is making the borders between them look good. Currently, the 'tiles' are large enough to be visible (due to memory constraints mainly; we are talking planetary-sized textures for a game taking place in space and on a continental ground view, with seamless transitions between them), and creating good borders between them with an algorithm that is fast enough to be useful has proven difficult. Sampling the n surrounding pixels and using the combined result did not yield very good borders, and was fairly slow on the GPU to boot (ca. 12ms for me, and that is without any lighting or shading and with very simple terrain texture shaders). So are there any practical, known methods to solve this problem?

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to visually distinguish from the number five (as with the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy," "D as in dog," or using a "phonetic alphabet" instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud?

    Background: we want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex, "1B23DF" can encode 16^6 values in the same number of characters, but this can sound ambiguous when read aloud ("B" vs. "D"). Likewise, for any string of N characters, you get (size of alphabet)^N values. The string is limited to a length of about six characters, due to wanting to fit easily within the capacity of human working memory. Thus, to find the maximum number of values we can encode, we need to find the largest unambiguous set of letters/numbers. There's no reason we can't consider the letters G-Z and some common punctuation, but I don't want to have to manually compare pairwise "does G sound like A?", "does G sound like B?", "does G sound like C?" myself. As we know, this would be O(n^2) linguistic work to do =)...
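
    To make the capacity arithmetic concrete, here is a tiny C++ sketch (my own; the 13-symbol alphabet below is only an illustrative assumption about what might survive the pruning, not a vetted answer to the question):

        #include <iostream>
        #include <random>
        #include <string>

        int main() {
            // Assumed example subset: digits whose names are distinct, plus
            // letters whose names don't obviously rhyme with each other or
            // look like a digit (0/O, 1/I/L, 5/S, 2/Z, 8/B, B/D/E... dropped).
            const std::string alphabet = "34679AFHMRWXY";
            const int length = 6;

            // Capacity = (alphabet size)^N, as in the question:
            // 10^6 for digits, 16^6 for hex.
            unsigned long long capacity = 1;
            for (int i = 0; i < length; ++i) capacity *= alphabet.size();
            std::cout << "alphabet size " << alphabet.size() << " -> "
                      << capacity << " distinct 6-character codes\n";

            // Generate one random code from the restricted alphabet.
            std::mt19937 rng(std::random_device{}());
            std::uniform_int_distribution<std::size_t> pick(0, alphabet.size() - 1);
            std::string code;
            for (int i = 0; i < length; ++i) code += alphabet[pick(rng)];
            std::cout << "sample code: " << code << '\n';
        }

    Even this aggressively pruned 13-symbol alphabet gives 13^6 ≈ 4.8 million six-character codes, versus 1 million for plain digits, so pruning for clarity costs less capacity than it might appear.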

  • Ubuntu 11.10 not starting in Graphical mode?

    - by iammilind
    I am in deep trouble. I am using Ubuntu 11.10 in dual-boot mode with XP. Originally my touchpad was not working (sometimes). To fix that, I installed something. After a reboot, my Ubuntu is not booting up! I have several software packages already installed over the past several weeks, so reinstalling the OS is my last resort. Can someone help me get it booting in graphical mode again? I had followed the procedure mentioned in this thread as well. With that link, somehow I am able to restart in text mode, but no luck after that: I am not able to get back to graphical mode. The important outputs of lspci are the following:

        00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
        00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)
        00:02.0 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c)

    I am attaching a few snapshots for more details on the hardware.

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL, I get errors. One of them is "'Filegroup reference and partitioning scheme' is not supported in this version of SQL Server." It's partly due to my poor memory (since I've succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don't I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven't got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak the script for Azure. For example, I take out all the "[dbo]." prefixes. Why are they put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there's information on what a table needs to look like in SQL Azure, but the tools should know this so I don't have to mess with it.
