Search Results

Search found 1309 results on 53 pages for 'amd athlon'.

Page 49/53

  • Ubuntu 13.04 installation issues: unable to handle kernel paging request error

    - by user173944
    I wish I could say that I've done more for the Linux community as of recent, but I am very, VERY new to all of this and I feel very much in over my head. I figured I would install Ubuntu on my computer and then learn and contribute to the community simultaneously. I will try to be as detailed as I can; please ask questions if you need clarification.

    I installed Ubuntu 13.04 (64-bit) on my Dell Inspiron 1501. It has an AMD Turion 64 TL-56 1.8 GHz dual-core mobile processor and an ATI Radeon Xpress 1150 chipset. As of right now it only has a total of 2 GB of RAM; however, I was planning on upgrading that in the near future, so I opted for the 64-bit Ubuntu 13.04.

    I first tried the live CD and everything seemed to be functioning correctly other than the wireless (but that's not the issue at hand; there are plenty of guides on the internet on how to get that working). The internet worked just fine when it was plugged in, so no issues there. However, once I went from that to installing 13.04 (just 13.04, no dual partitioning; I want this computer to run strictly Ubuntu), it did not work. It took me into a shell that I could not type anything into. In this shell it said "BUG: unable to handle kernel paging request", then it printed a bunch of traces and froze up, and I had to hard-reset the laptop.

    I tried the boot-repair program multiple times with many different settings. Typically, after starting up, the laptop would say something along the lines of "recursive errors, will attempt to fix", it would attempt to fix a couple of things, and then the computer would freeze up after the text said "end trace", so I had to hard-reset it again. I'm not an impatient person, either: when I say it froze up, I mean for a period of at least 15 minutes each time before I decided to hard-reset. I attempted to install 12.10 on it instead and got the same exact message, and when I ran boot-repair it did the same exact thing as before.

    I am currently in the process of running memtest86+ on the computer's memory, though I really don't believe that it, or any of the hardware, is at fault, because the machine was still running Windows Vista perfectly when I decided to switch over to Ubuntu. So far the memtest has come back just fine without any errors, but I've only been running it for approximately an hour.

    So this is the situation I'm in. I did notice when I was using the live disk that the video driver needed updating, so I did that, though I'm fairly certain it has nothing to do with this. I have also attempted (though I'm not certain my attempt accomplished what I had planned) to manually edit the GRUB settings by setting acpi=0 on top of adding nomodeset to the boot commands. Like I said, I'm not sure I did that correctly, though the usual procedure is sketched below. If anyone needs any more information I will be more than happy to provide it, and I will post back once I get the full results of the memtest. I very much appreciate any ideas anyone else has; I've been at this for a few days to no avail. Thank you.
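
    For anyone comparing notes, this is the standard way those two boot parameters get set permanently ("quiet splash" are the stock defaults; nomodeset and acpi=0 are the additions described above):

        # /etc/default/grub -- edit this line, then regenerate the config:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset acpi=0"

        sudo update-grub   # rebuilds /boot/grub/grub.cfg
        sudo reboot

    For a one-off test, the same parameters can instead be appended to the line beginning with "linux" in the GRUB menu (press "e" at the boot menu to edit it).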


  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
    I used the Ubuntu software updater and updated to the most recent packages. After the last update today, it's like I have gone into recovery mode, but I haven't. I am running Ubuntu GNOME. First, everything looks like this: [screenshot in the original post]. Switching to dark mode does nothing. Also, default applications do not work, such as Startup and the default screenshot application. Everything was working fine before the latest software update.

    System info: Ubuntu 14.04 LTS, GNOME Shell 3.10.4, kernel 3.13.0-29. I can't figure out how to get an update history, but this is almost a fresh install; it's about a week old, and this is the 3rd time I've used the Ubuntu Software Updater. I am running an AMD/ATI HD 6700 with the proprietary Catalyst drivers. I tried to provide all the information that I thought would be useful; if you need any more, please let me know.

    Edit - I believe something went wrong within these updates. Update log:

        Start-Date: 2014-06-09 19:07:07
        Commandline: aptdaemon role='role-commit-packages' sender=':1.68'
        Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2)
        Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1), gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01), libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1)
        End-Date: 2014-06-09 19:07:12

    I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit. Log:

        Start-Date: 2014-06-09 18:59:06
        Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386
        Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic), libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic), nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic), libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1), libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic), libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic), libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1), libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic), nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic), libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic), libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386 (1.1.3-1, automatic), libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic), libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic), libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1)
        End-Date: 2014-06-09 18:59:11
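
    For what it's worth, the ~eugenesan~trusty version strings in the first log say those GNOME session packages came from a third-party PPA rather than the Ubuntu archive, which fits the symptoms. The usual rollback is ppa-purge; the PPA name below is only a guess from the version suffix and must be checked against the files in /etc/apt/sources.list.d/:

        sudo apt-get install ppa-purge
        # hypothetical PPA name -- verify with: ls /etc/apt/sources.list.d/
        sudo ppa-purge ppa:eugenesan/ppa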


  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new federal tax plans is new licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful; that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

    First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work, because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money, and the old per-socket model was not going to do that going forward.

    Per-core licensing also simplifies virtualization licensing. Physical core = virtual core, at least for licensing. Oversubscribe your processors? That's your lookout. You still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there.

    Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial.

    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, that is the minimum core count Microsoft proposes to swap when transitioning per-socket to per-core if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ:

    15. How do I migrate from processor licenses to core licenses? What is the migration path?
    Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).

    Looks like the folks who invested in the AMD 12-core chips will make out like bandits.

    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, compression, or partitioning in the BI Edition. Those are Enterprise-scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8-core server).

    The only part of the message I am missing is whether the current failover licensing policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.


  • Graphical driver 13.10 ATI RV630

    - by Michael Cephalus
    I started updating the distro from 13.04 to 13.10. Then I got my hands on a Radeon HD 2600, and I installed the RV630-compatible Catalyst driver from the official web page. After that, the X server crashed every time I opened a browser or VLC, for example. I took notice that there was no driver listed in the configuration underneath:

        michael@statubtunu:~$ lshw -c video
        WARNING: you should run this program as super-user.
          *-display UNCLAIMED
               description: VGA compatible controller
               product: RV630 PRO [Radeon HD 2600 PRO]
               vendor: Advanced Micro Devices, Inc. [AMD/ATI]
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 64 bits
               clock: 33MHz
               capabilities: vga_controller bus_master cap_list
               configuration: latency=0
               resources: memory:d0000000-dfffffff memory:e0500000-e050ffff ioport:1000(size=256) memory:e0000000-e001ffff

    I installed additional drivers from Jockey and the Ubuntu Software Center ATI driver, though that only made it crash the X server completely, and when I type:

        michael@statubtunu:~$ sudo startx
        X.Org X Server 1.14.3
        Release Date: 2013-09-12
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu
        Current Operating System: Linux statubtunu 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 i686
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-13-generic root=UUID=8fb2e395-0ea2-4f45-ac66-225696b7ce2c ro quiet splash vt.handoff=7
        Build Date: 15 October 2013 09:23:29AM
        xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.30.2
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 12 18:50:02 2013
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        Initializing built-in extension Generic Event Extension
        Initializing built-in extension SHAPE
        Initializing built-in extension MIT-SHM
        Initializing built-in extension XInputExtension
        Initializing built-in extension XTEST
        Initializing built-in extension BIG-REQUESTS
        Initializing built-in extension SYNC
        Initializing built-in extension XKEYBOARD
        Initializing built-in extension XC-MISC
        Initializing built-in extension SECURITY
        Initializing built-in extension XINERAMA
        Initializing built-in extension XFIXES
        Initializing built-in extension RENDER
        Initializing built-in extension RANDR
        Initializing built-in extension COMPOSITE
        Initializing built-in extension DAMAGE
        Initializing built-in extension MIT-SCREEN-SAVER
        Initializing built-in extension DOUBLE-BUFFER
        Initializing built-in extension RECORD
        Initializing built-in extension DPMS
        Initializing built-in extension X-Resource
        Initializing built-in extension XVideo
        Initializing built-in extension XVideo-MotionCompensation
        Initializing built-in extension SELinux
        Initializing built-in extension XFree86-VidModeExtension
        Initializing built-in extension XFree86-DGA
        Initializing built-in extension XFree86-DRI
        Initializing built-in extension DRI2
        Loading extension GLX
        ERROR: could not insert 'fglrx': No such device
        (II) [KMS] drm report modesetting isn't supported.
        (EE)
        (EE) Backtrace:
        (EE) 0: /usr/bin/X (xorg_backtrace+0x49) [0xb77780b9]
        (EE) 1: /usr/bin/X (0xb75d8000+0x1a3e24) [0xb777be24]
        (EE) 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb75b540c]
        (EE) 3: /usr/bin/X (xf86findOption+0x2a) [0xb7681daa]
        (EE) 4: /usr/bin/X (xf86findOptionValue+0x23) [0xb7681f43]
        (EE) 5: /usr/bin/X (0xb75d8000+0x7ebfd) [0xb7656bfd]
        (EE) 6: /usr/bin/X (xf86ProcessOptions+0x37) [0xb7657507]
        (EE) 7: /usr/lib/xorg/modules/libvbe.so (vbeDoEDID+0xe7) [0xb5eb8647]
        (EE) 8: /usr/lib/xorg/modules/drivers/vesa_drv.so (0xb5ee7000+0x287c) [0xb5ee987c]
        (EE) 9: /usr/bin/X (InitOutput+0xb23) [0xb7659c33]
        (EE) 10: /usr/bin/X (0xb75d8000+0x2a30b) [0xb760230b]
        (EE) 11: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0xb71ba905]
        (EE) 12: /usr/bin/X (0xb75d8000+0x2a908) [0xb7602908]
        (EE)
        (EE) Segmentation fault at address 0x5
        (EE)
        Fatal server error:
        (EE) Caught signal 11 (Segmentation fault). Server aborting
        (EE)
        (EE) Please consult the The X.Org Foundation support at for help.
        (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        (EE)
        (EE) Server terminated with error (1). Closing log file.

    This is what comes up, but no GUI. Is there any way to deal with this?
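
    For context: the HD 2000-4000 series (RV630 included) was moved to AMD's legacy Catalyst branch, which does not support the X server 1.14 shipped with 13.10; that is consistent with the "ERROR: could not insert 'fglrx': No such device" line above. The commonly suggested recovery is to remove fglrx and fall back to the open-source radeon driver; a hedged sketch:

        sudo apt-get purge 'fglrx*'               # remove the proprietary driver
        sudo rm -f /etc/X11/xorg.conf             # drop any fglrx-generated config
        sudo apt-get install --reinstall xserver-xorg-video-ati
        sudo reboot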


  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with:

        CPU: AMD A6-3620 APU with Radeon Graphics
        RAM: 6GB
        BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012]
        Disk: SATA1 (/dev/sda) - 1 TB (Windows)
        Disk: SATA2 (/dev/sdb) - 1 TB, partitioned using "parted -a optimal /dev/sdb" as follows:
            1049KB  201MB  FAT32, boot flag set
            201MB   60GB   ext2 (/)
            68GB    78GB   linux-swap(v1) (swap)
            78GB    790GB  ext4 (/home)
            - the rest is "free" space reserved for other purposes (eventually)
        Ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit]
        Kernel: Linux 3.2.0-29-generic

    I created a bootable EFI USB from the (64-bit) ISO which I downloaded. I can run and install from the USB without any problems. The BIOS is an EFI BIOS that appears to be capable of booting in either EFI or legacy mode.

    Initially, I did the "standard" install with NOTHING on disk 2 and let the installer configure everything. The net result was that when I started the computer and forced it into the "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk.

    At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install, and set /dev/sdb1 to "bios_grub", /dev/sdb2 as "/" (ext4), /dev/sdb3 as swap, and /dev/sdb4 as "/home". [Note: fearing that elilo possibly did not recognize ext4, I switched /dev/sdb2 to ext2 and re-installed.] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formatted /dev/sdb1 as FAT32, set the boot flag, and repeated the install, ignoring the messages about no bios_grub partition.

    After several attempts to get GRUB 2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi. Based on the advice in one of the web pages I found, I renamed it to bootx64.efi. This worked.) In that same directory (/efi/boot), I copied the file pointed to by the link in /dev/sdb2/vmlinuz to /efi/boot/vmlinuz, and the file pointed to by the link in /dev/sdb2/initrd.img to /efi/boot/initrd.img. I created an elilo.conf file as follows:

        timeout=5000
        prompt
        default=linux-boot

        image=vmlinuz
            label=linux-boot
            read-only
            initrd=initrd.img
            root=/dev/sdb2

    The /efi/boot directory contains 4 files: bootx64.efi, elilo.conf, vmlinuz, initrd.img.

    When I power-cycle the computer and force the boot menu, drive 2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing Enter, it appears to load the kernel (I have tried it with verbose=5, and there is a long string of messages with the final one a command line to load the kernel and a series of several dots that fly by), then the screen goes blank and it reboots the computer. [Note: I have also tried substituting the UUID as found in the /etc/fstab of the installed system for the root directory. This had no effect.]

    This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.


  • DirectX particle system. ConstantBuffer

    - by Liuka
    I'm new to DirectX and I'm making a 2D game. I want to use a particle system to simulate a 3D starfield, so each star has to set its own constant buffer for the vertex shader, e.g. to set its world matrix. So if I have 500 stars (that move every frame) I need to call VSSetConstantBuffers 500 times, and map/unmap each buffer. With 500 stars I have an average of 220 fps, and that's quite good. My bottleneck is VS/PSSetConstantBuffers: if I don't call these functions I have 400 fps (obviously nothing is displayed, since I don't set the positions of the stars). So is there a method to speed up the rendering of the particle system?

    P.S. I'm using Intel integrated graphics (HD 2000-3000). With an NVIDIA (or AMD) GPU, would I have the same bottleneck? If, for example, I don't call SetShaderResource I have 10-20 fps more (for 500 objects); that is not 180. Why does SetConstantBuffer take so long?

        LPVOID VSdataPtr = VSmappedResource.pData;
        memcpy(VSdataPtr, VSdata, CszVSdata);
        context->Unmap(VertexBuffer, 0);

        result = context->Map(PixelBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &PSmappedResource);
        if (FAILED(result)) {
            outputResult.OutputErrorMessage(TITLE, L"Cannot map the PixelBuffer", &result, OUTPUT_ERROR_FILE);
            return;
        }
        LPVOID PSdataPtr = PSmappedResource.pData;
        memcpy(PSdataPtr, PSdata, CszPSdata);
        context->Unmap(PixelBuffer, 0);

        context->VSSetConstantBuffers(0, 1, &VertexBuffer);
        context->PSSetConstantBuffers(0, 1, &PixelBuffer);

    This updates and sets the buffers. It's part of the render method of a sprite class that also contains the vertex buffer and the texture to apply to the quads (it's a 2D game). I have an array of 500 stars (sprites set up with a star texture). Every frame: clear the back buffer; draw the array of stars; present the back buffer. Draw also calls the function update(), which calculates the position of the sprite on screen based on a "camera" class.

    OK, creating a vertex buffer with the vertices of each quad (star) seems to be good, since the stars don't change their "virtual" position. So... in a particle system (where particles move) it's better to have all the objects in only one vertex array, rather than an array of different sprites/objects, in order to update all the vertices' positions with a single set-buffer call. In this case I have to use a dynamic vertex buffer with the vertex positions, like this:

        verticesForQuad = {
            { XMFLOAT3( (float)halfDImensions.x-1+pos.x,  (float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 0.0f) },
            { XMFLOAT3( (float)halfDImensions.x-1+pos.x, -(float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 1.0f) },
            { XMFLOAT3(-(float)halfDImensions.x-1+pos.x,  (float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 0.0f) },
            { XMFLOAT3(-(float)halfDImensions.x-1+pos.x, -(float)halfDImensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 1.0f) },
            // ...other quads
        };

    where halfDImensions is the half-size in pixels of a texture and pos the virtual position of a star. Then create an array of verticesForQuad and create the vertex buffer:

        ZeroMemory(&vertexDesc, sizeof(vertexDesc));
        vertexDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexDesc.ByteWidth = sizeof(VertexType) * 4 * numStars;

        ZeroMemory(&resourceData, sizeof(resourceData));
        resourceData.pSysMem = verticesForQuad;
        result = device->CreateBuffer(&vertexDesc, &resourceData, &CvertexBuffer);

    and call each frame:

        context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset);

    But if I want to add and remove objects, I have to recreate the buffer each time, haven't I? Is there a faster way? I think I can create a vertex buffer with a max size (e.g. 10000 objects) and, when I update it, set only the 250 positions in use (for 250 objects, for example) and pass that count as the vertex count to the draw function (numObjs * 4). Or am I wrong?
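
    That last guess is essentially the standard technique: allocate one oversized dynamic vertex buffer once, rewrite only the live particles each frame with a single map, and draw only that many. A minimal sketch under those assumptions (the buffer must be created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE rather than D3D11_USAGE_DEFAULT; quadIndexBuffer, holding 6 indices per quad, and MAX_STARS are assumed names, not from the question):

        // at startup: one oversized dynamic buffer instead of per-sprite buffers
        vertexDesc.Usage = D3D11_USAGE_DYNAMIC;
        vertexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        vertexDesc.ByteWidth = sizeof(VertexType) * 4 * MAX_STARS;

        // each frame: one Map/Unmap for every particle, then one draw call
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(context->Map(CvertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            // write only the quads actually in use this frame
            memcpy(mapped.pData, verticesForQuad, sizeof(VertexType) * 4 * numStars);
            context->Unmap(CvertexBuffer, 0);
        }
        context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset);
        context->IASetIndexBuffer(quadIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
        context->DrawIndexed(numStars * 6, 0, 0);   // 2 triangles (6 indices) per star

    Adding or removing stars then only changes numStars; the buffer itself is never recreated, and the per-star VSSetConstantBuffers calls disappear because each star's position is baked into its vertices.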


  • JavaOne 2012: a "Make the Future Java" interview

    - by ???02
    JavaOne 2012 was held in San Francisco over four days, from September 30 to October 4, 2012, under the theme "Make the Future Java". In this interview, a member of Oracle's Fusion Middleware business looks back at the conference and at where the Java platform is heading.

    On the conference theme, he stresses that "Make the Future Java" is a message about the community building Java's future together, across all three platforms: Java SE, Java EE and Java ME. HTML5 and the shift of rich clients toward the browser were recurring topics throughout the keynotes, as was the role of the JCP (Java Community Process) and community contribution in moving the platform forward.

    On Java SE: Java SE 8 is planned for 2013, with Java SE 9 to follow roughly two years later; the Jigsaw module system has been deferred to Java SE 9. Java SE 8 will instead ship Nashorn, a new JavaScript engine for the JVM that replaces Rhino. Nashorn builds on the InvokeDynamic support introduced in Java SE 7, and several JavaOne sessions showed it running JavaScript efficiently on the JVM. JavaFX continues to be positioned as the successor to the aging AWT and Swing GUI toolkits and is expected to ship together with Java SE 8; JavaFX Scene Builder was released for Linux, and JavaFX for ARM devices was also discussed.

    On Java EE: Java EE 7 is now expected in 2013. Its themes include HTML5 support, an overhauled JMS (Java Message Service), and wider use of the CDI (Contexts and Dependency Injection) model introduced with Java EE 6; the cloud-related features originally discussed for Java EE 7 are being reconsidered for Java EE 8.

    Among the projects shown at JavaOne, he highlights Project Easel (HTML5/CSS3/JavaScript tooling coming in NetBeans 7.3), Project Sumatra (work with AMD in OpenJDK to offload Java computation to the GPU through the HotSpot JVM) and Project Avatar (a server-side JavaScript programming model connected to Java EE, often compared with Node.js). The interview closes on a lighter note, with the conference T-shirts and with Liquid Robotics, whose Java-powered ocean-going robots were featured at JavaOne.


  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely the AMD implementation, but the result bothers me. Let me describe the problem briefly. Two types of data flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (map), and then sum up the overall score for every feature (reduce). There are around 10k features and 30k samples.

    I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory accesses (pick some of the 9000+ dimensions and do plus/minus calculations). Since I cannot coalesce the memory accesses, it is costly. Then I tried to decompose the problem by samples. The problem is that, to sum up the overall score, all threads compete for a few score variables; they keep overwriting the score, which turns out to be incorrect. (I cannot compute individual scores first and sum them up later, because that would require 10k * 30k * 4 bytes.)

    The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to ray tracing (where you run calculations of millions of rays against millions of triangles). Any ideas?
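
    A hedged sketch of the usual fix for the contention in the by-samples decomposition: let each work-group reduce its samples in __local memory and write a single partial sum, then fold the few thousand partials in a second tiny kernel or on the host. The scoring itself is elided; this only shows the reduction shape, and it assumes the work-group size is a power of two:

        // OpenCL C: each work-group collapses its chunk of per-sample scores
        // for one feature into a single partial sum.
        __kernel void score_reduce(__global const float *sample_scores, // scores for one feature
                                   __global float *group_partials,      // one slot per work-group
                                   const int n,
                                   __local float *scratch)
        {
            int gid = get_global_id(0);
            int lid = get_local_id(0);

            // map step: each work-item contributes one sample's score
            scratch[lid] = (gid < n) ? sample_scores[gid] : 0.0f;
            barrier(CLK_LOCAL_MEM_FENCE);

            // tree reduction inside the work-group: no global contention
            for (int offset = get_local_size(0) / 2; offset > 0; offset /= 2) {
                if (lid < offset)
                    scratch[lid] += scratch[lid + offset];
                barrier(CLK_LOCAL_MEM_FENCE);
            }

            // one global write per work-group instead of one per thread
            if (lid == 0)
                group_partials[get_group_id(0)] = scratch[0];
        }

    With 30k samples this stores only one float per work-group (a few hundred in total), sidestepping the 10k * 30k intermediate buffer the question rules out.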


  • How to force two process to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is programmed in C++ under Linux, and the processes communicate among themselves using Linux shared memory. Usually, in software development, performance optimization is done in the final stage, and here I came to a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the first ones and more than 60% on the second ones.

    After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which are usually AMD Opterons and Intel Xeons, use a memory architecture called NUMA, which basically means that each processor has its own fast "local" memory, and accessing memory belonging to other CPUs is expensive.

    After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as the processes are constantly accessing memory that belongs to other processes.

    Question: is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on one particular processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule its "brother" process (the process with which it communicates through shared memory) on that same processor, so that performance is not penalized.
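
    There is no stock-kernel way to say "keep these two processes together wherever they land", but pinning both members of a pair to the same core (or the same NUMA node's cores) achieves the stated goal. A minimal sketch using the Linux affinity API; which CPU each pair gets is left to the caller:

        // Linux-specific; with g++ the glibc prototype for sched_setaffinity
        // is visible because _GNU_SOURCE is defined by default.
        #include <sched.h>
        #include <sys/types.h>
        #include <cstdio>

        // Pin one process to a single logical CPU. Call it with the same cpu
        // value for both processes of a communicating pair.
        bool pin_to_cpu(pid_t pid, int cpu)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(pid, sizeof(set), &set) != 0) {
                std::perror("sched_setaffinity");
                return false;
            }
            return true;
        }

    The same mask can be set from outside with taskset -cp 0 <pid>, and numactl --cpunodebind/--membind offers node-level rather than core-level placement, which is usually enough to keep the shared segment local to both processes.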


  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically the TF2 server is a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great. Now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a named FIFO pipe. What I want is for this pipe to be read-only and non-blocking for the server process, so I can write into the pipe and the server executes it, but I still want to write via the console on the server: if I switch back to the foreground process of the server and type status, I want an answer printed.

    I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working: the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That actually works pretty well: I can see the console output of the server, and I can write to the FIFO and the server executes the commands, but I can't write to the server via the console anymore. Also, if I write 'exit' to the pipe (which should end the server) while running it in a screen, the screen window gets killed for some reason (wtf, why?).

    I only need the server to read the FIFO without blocking, AND all my keyboard input on the server itself should be sent to the server, AND all server output should be written to the console. Is that possible? If yes, how?
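
    A hedged alternative that sidesteps the FIFO entirely, since the real goal is "inject commands without losing the interactive console": run each server in a named screen session and inject console input with screen's stuff command (the session name tf2_testing is just an example):

        # start the server in a detached, named screen session
        screen -dmS tf2_testing ./start_server_testing

        # attach for normal interactive console use (detach with Ctrl-a d)
        screen -r tf2_testing

        # from a management script: type a command into the server console
        screen -S tf2_testing -p 0 -X stuff $'sm_reloadadmins\n'

    Because stuff literally types into the session's terminal, the server keeps its normal stdin/stdout behavior, which avoids both the FIFO blocking problem and the "exit kills the screen window" surprise.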


  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute-force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute-force routines up to run in parallel on multi-processor architectures. To do this I used the well-documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute-force routines. The performance gains on my workstation were very pleasing.

    I am running Windows XP on the following hardware: Intel Core 2 Quad CPU 2.33 GHz, 3.49 GB RAM. Initial tests indicated average performance gains of approx 40% when using 4 threads.

    The next step was to deploy the new multi-threaded version of the algorithm to our higher-spec UAT server. Here is the spec of our UAT server: Windows 2003 Server R2 Enterprise x64, 8 (quad-core) AMD Opteron CPUs at 2.70 GHz, 255 GB RAM.

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high-spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads). The algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on 8 quad-core CPUs than on my Core 2 Quad workstation? Why are we seeing no performance gains with the multi-threading on the W2003 server, while the XP workstation tests show gains of up to 40%? Any help or pointers would be appreciated.

    Regards, Myles
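
    One variable worth isolating on a box like that, offered as a guess rather than a diagnosis: by default the .NET runtime uses the workstation garbage collector, which scales poorly on large multi-socket (NUMA) servers under allocation-heavy multi-threaded loads. Switching on the server GC is a one-line app.config experiment:

        <configuration>
          <runtime>
            <!-- server GC: one GC heap and collector thread per logical CPU -->
            <gcServer enabled="true"/>
          </runtime>
        </configuration>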


  • Show image with link based on first directory with Javascript

    - by Nestorete
    Hi, I need some help please. I want to show an image on my page depending on the first directory of the URL. Example: any of these URLs should show image1.jpg:

        www.mysite.com/audio/amplifiers/400wats.html
        www.mysite.com/audio/
        www.mysite.com/audio/amplifiers/

    Any of these others should show image2.jpg:

        www.mysite.com/video/spots/40wats.html
        www.mysite.com/video/amplifiers/400wats.html
        www.mysite.com/video/lighting/laser.html
        www.mysite.com/video/laser/

    At the moment I can show the image only if the URL is exactly the first directory, but not in the inner directories or documents. This is the script that I'm using right now:

        <script type="text/javascript">
        switch (location.pathname) {
          case "/audio/":
            document.write("From Web<BR>")
            break
          case "/video/":
            document.write('<A HREF="slides.htm" target="_blank"><IMG SRC="/adman/banners/joinvip.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
            break
          default:
            document.write('<A HREF="http://www.apple.com" target="_blank"><IMG SRC="http://www.amd.com/us-en/assets/content_type/DownloadableAssets/NEW_PIB_728x90.gif" WIDTH=728 HEIGHT=90 BORDER=0></A>')
            break
        }
        </script>

    Thank you
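
    The switch above compares the entire pathname, so only the exact strings "/audio/" and "/video/" ever match. A sketch of the usual fix, keying on just the first path segment (the image file names are the placeholders from the question):

        <script type="text/javascript">
        // "/audio/amplifiers/400wats.html".split('/') -> ["", "audio", "amplifiers", "400wats.html"]
        var section = location.pathname.split('/')[1];
        switch (section) {
          case "audio":
            document.write('<IMG SRC="image1.jpg">');
            break;
          case "video":
            document.write('<IMG SRC="image2.jpg">');
            break;
          default:
            document.write('<IMG SRC="default.jpg">');
            break;
        }
        </script>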


  • Error while opening port in Java

    - by Deepak
    I am getting the following error while trying to open the cash drawer:

        Error loading win32com: java.lang.UnsatisfiedLinkError: C:\Program Files\Java\jdk1.6.0_15\jre\bin\win32com.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform

    The code I am using is as follows:

        import javax.comm.*;
        import java.util.*;

        /** Check each port to see if it is open. **/
        public class openPort {
            public static void main(String[] args) {
                // Get the list of ports
                Enumeration port_list = CommPortIdentifier.getPortIdentifiers();
                while (port_list.hasMoreElements()) {
                    CommPortIdentifier port_id = (CommPortIdentifier) port_list.nextElement();
                    // Find each port's type and name
                    if (port_id.getPortType() == CommPortIdentifier.PORT_SERIAL) {
                        System.out.println("Serial port: " + port_id.getName());
                    } else if (port_id.getPortType() == CommPortIdentifier.PORT_PARALLEL) {
                        System.out.println("Parallel port: " + port_id.getName());
                    } else
                        System.out.println("Other port: " + port_id.getName());
                    // Attempt to open it
                    try {
                        CommPort port = port_id.open("PortListOpen", 20);
                        System.out.println("  Opened successfully");
                        port.close();
                    } catch (PortInUseException pe) {
                        System.out.println("  Open failed");
                        String owner_name = port_id.getCurrentOwner();
                        if (owner_name == null)
                            System.out.println("  Port owned by unidentified app");
                        else
                            // The owner name is not returned correctly unless it is a Java program.
                            System.out.println("  " + owner_name);
                    }
                }
            } // main
        } // openPort


  • Filtering most out of XML with XSL?

    - by Gnudiff
    I need to transform a lot of XML files (Fedora export) into a different kind of XML. I'm trying to do it with XSL stylesheets and checking the output with the msxsl transformer. Suppose I have an XML file like this (assuming there are actually other nodes inside AAA, OBJ, and all other nodes), Source.xml:

        <DOC>
          <AAA>
            <STUFF>example</STUFF>
            <OBJ>
              <OBJVERS id="A1" CREATED="2008-02-18T13:28:08.245Z"/>
              <OBJVERS id="A2" CREATED="2008-02-19T10:42:41.965Z"/>
              <OBJVERS id="A13" CREATED="2009-03-16T12:43:11.703Z"/>
            </OBJ>
          </AAA>
          <FFF/>
          <GGG/>
          <DDD>
            <FILE />
          </DDD>
        </DOC>

    which I need to look something like this (Target.xml):

        <MYOBJ>
          <ELEM>contents of the OBJVERS with the biggest id OR creation date (whichever is easier) go here</ELEM>
          <IMAGE>contents of the FILE node go here</IMAGE>
        </MYOBJ>

    The main problem I have, being very new to XSL (and for this particular task not having enough time to learn it properly), is that I can't understand how to tell the XSL processor not to process anything else; I keep getting output from other nodes, for example.

    Update: basically, I solved this problem in the meantime. I will post my own answer and close the question.
    Update 2: OK, Andrew's answer works too, so I am just accepting it. :)
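
    For anyone hitting the same wall: the stray output comes from XSLT's built-in template rules, which copy text nodes through for any node you don't match; an empty template for text() switches that off. A hedged sketch of one working approach against the sample above (it assumes the newest OBJVERS is the last one in document order, as in the sample):

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>

          <!-- override the built-in rule that copies all text through -->
          <xsl:template match="text()"/>

          <xsl:template match="DOC">
            <MYOBJ>
              <ELEM><xsl:copy-of select="AAA/OBJ/OBJVERS[last()]"/></ELEM>
              <IMAGE><xsl:copy-of select="DDD/FILE"/></IMAGE>
            </MYOBJ>
            <!-- no apply-templates here, so AAA/STUFF, FFF, GGG are never visited;
                 the text() rule above guards any path that still gets walked -->
          </xsl:template>
        </xsl:stylesheet>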


  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer that primarily writes in PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and I use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music.

    I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I had XP Professional 32 on it for the last two years and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow.

    I sketch a lot, for explaining things and for developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming at all? I could manage with my 15.4" screen, but I am spoiled by the two 19" displays on my home desktop and my job's workstation.


  • how to fix my error saying expected expression before 'else'

    - by user292489
    This program is intended to read a set of numbers from a .txt file and write them to two other .txt files, called even and odd, as follows:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int i = 0, even, odd;
            int number[i];

            // check to make sure that all the file names are entered
            if (argc != 3) {
                printf("Usage: executable in_file output_file\n");
                exit(0);
            }

            FILE *dog = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd = fopen(argv[3], "w");

            // check whether the file has been opened successfully
            if (dog == NULL) {
                printf("File %s cannot open!\n", argv[1]);
                exit(0);
            }

            //odd = fopen(argv[2], "w");
            { if (i % 2 != 1) i++; }
            fprintf(feven, "%d", even);
            fscanf(dog, "%d", &number[i]);
            else { i % 2 == 1; i++; }
            fprintf(fodd, "%d", odd);
            fscanf(dog, "%d", &number[i]);

            fclose(feven);
            fclose(fodd);
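
    The compiler message itself comes from the dangling else: the block after the if is closed before the fprintf/fscanf lines, so the else that follows has no if to attach to. There are other problems too: the argc check allows only two file names while argv[3] is used, and even/odd are printed without ever being assigned. A hedged sketch of one way the intended read-and-split logic could look:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int number;

            /* three file names are needed: input, evens, odds */
            if (argc != 4) {
                printf("Usage: executable in_file even_file odd_file\n");
                exit(0);
            }

            FILE *dog   = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd  = fopen(argv[3], "w");

            if (dog == NULL) {
                printf("File %s cannot open!\n", argv[1]);
                exit(0);
            }

            /* route each number to the file matching its parity */
            while (fscanf(dog, "%d", &number) == 1) {
                if (number % 2 == 0)
                    fprintf(feven, "%d\n", number);
                else
                    fprintf(fodd, "%d\n", number);
            }

            fclose(dog);
            fclose(feven);
            fclose(fodd);
            return 0;
        }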


  • Is there a IDE/compiler PC benchmark I can use to compare my PCs performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of the factor by which build times would improve, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800 X2 with 2 GB RAM on Vista 32.

    For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4 GB RAM on Vista 64, and also with other processors and other RAM sizes; it would also be good to see whether hard disk performance is a big factor.

    EDIT: I mentioned Vista 64 because I'm aware that 32-bit Vista can only use about 3 GB of RAM. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in running the 32-bit VS 2008 on a 64-bit OS.


  • best database design for city zip & state tables

    - by ryan a
    My application will need to reference addresses. Street info will be stored with my main objects, but the rest needs to be stored separately to reduce redundancy. How should I store/retrieve ZIPs, cities and states? Here are some of my ideas.

    Single-table solution (can't do relationships):

        [locations]
        locationID
        locationParent (FK for locationID - 0 for state entries)
        locationName (city, state)
        locationZIP

    Two tables (with relationships, FK constraints, referential integrity):

        [state]
        stateID
        stateName

        [city]
        cityID
        stateID (FK for state.stateID)
        cityName
        zipCode

    Three tables:

        [state]
        stateID
        stateName

        [city]
        cityID
        stateID (FK for state.stateID)
        cityName

        [zip]
        zipID
        cityID (FK for city.cityID)
        zipName

    Then I read up on ZIP codes and how they are assigned. They aren't specifically related to cities. Some cities have more than one ZIP (OK, that still works), but some ZIPs are in more than one city (oh snap), and some other ZIPs (very few) are in more than one state! Also, some ZIPs are not even in the same state as the addresses they belong to at all. It seems ZIPs are made for carrier-route identification, and some remote places are best served by post offices in neighboring cities or states. Does anybody know of a good (not perfect) solution that takes this into consideration and minimizes discrepancies as the database grows?
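
    Given that a ZIP can span cities (and occasionally states) while a city has many ZIPs, the relationship is many-to-many, which points at a junction table instead of a cityID column on the zip table. A hedged sketch in generic SQL, reusing the names from the question:

        CREATE TABLE state (
            stateID   INT PRIMARY KEY,
            stateName VARCHAR(50) NOT NULL
        );

        CREATE TABLE city (
            cityID   INT PRIMARY KEY,
            stateID  INT NOT NULL REFERENCES state(stateID),
            cityName VARCHAR(100) NOT NULL
        );

        CREATE TABLE zip (
            zipID   INT PRIMARY KEY,
            zipCode CHAR(5) NOT NULL UNIQUE
        );

        -- a ZIP may touch many cities, and a city many ZIPs
        CREATE TABLE city_zip (
            cityID INT NOT NULL REFERENCES city(cityID),
            zipID  INT NOT NULL REFERENCES zip(zipID),
            PRIMARY KEY (cityID, zipID)
        );

    Because zip no longer belongs to a city or a state, the odd multi-city and multi-state ZIPs fall out naturally; an address row would then carry both a cityID and a zipID.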


  • c++ floating point precision loss: 3015/0.00025298219406977296

    - by SigTerm
    The problem: Microsoft Visual C++ 2005 compiler, 32-bit Windows XP SP3, AMD 64 X2 CPU. Code:

        double a = 3015.0;
        double b = 0.00025298219406977296;
        //*((unsigned __int64*)(&a)) == 0x40a78e0000000000
        //*((unsigned __int64*)(&b)) == 0x3f30945640000000
        double f = a/b; //3015/0.00025298219406977296;

    The result of the calculation (i.e. "f") is 11917835.000000000 (*((unsigned __int64*)(&f)) == 0x4166bb4160000000), although it should be 11917834.814763514 (i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef). I.e. the fractional part is lost. Unfortunately, I need the fractional part to be correct.

    Questions:
    1) Why does this happen?
    2) How can I fix the problem?

    Additional info:
    0) The result is taken directly from the "watch" window (it wasn't printed, and I didn't forget to set the printing precision). I also provided a hex dump of the floating-point variable, so I'm absolutely sure about the calculation result.
    1) The disassembly of f = a/b is:

        fld  qword ptr [a]
        fdiv qword ptr [b]
        fstp qword ptr [f]

    2) f = 3015/0.00025298219406977296; yields the correct result (f == 11917834.814763514, i.e. *((unsigned __int64*)(&f)) == 0x4166bb415a128aef), but it looks like in this case the result is simply calculated at compile time:

        fld  qword ptr [__real@4166bb415a128aef (828EA0h)]
        fstp qword ptr [f]

    So, how can I fix this problem? P.S. I've found a temporary workaround (I need only the fractional part of the division, so I simply use f = fmod(a, b)/b at the moment), but I would still like to know how to fix this problem properly; double precision is supposed to be about 16 decimal digits, so such a calculation isn't supposed to cause problems.
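
    One cause that fits these exact symptoms, offered as a hedged guess: on 32-bit x86, some libraries (Direct3D 9 is the classic example) lower the x87 FPU precision control to 24-bit single precision when they initialize, after which fdiv rounds every result to float, and 11917834.81... rounds to exactly 11917835.0 at float spacing. If that is what happened, restoring 53-bit precision is a small MSVC-specific experiment:

        #include <float.h>

        double divide_full_precision(double a, double b)
        {
            unsigned int original = 0, tmp = 0;
            _controlfp_s(&original, 0, 0);          // read the current x87 control word
            _controlfp_s(&tmp, _PC_53, _MCW_PC);    // force 53-bit (double) precision
            double f = a / b;                       // fdiv now rounds to double, not float
            _controlfp_s(&tmp, original, _MCW_PC);  // put the old precision bits back
            return f;
        }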


  • why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. There is no error reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine, while creating CLSID_FilterGraph is okay. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. Which DLL is this interface related to, and is there any function I need to call beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.
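
    For reference, a hedged sketch of the creation call with the failure code captured; the HRESULT is the useful clue here. In particular, 0x80040154 (REGDB_E_CLASSNOTREG) would mean the class is simply not registered on that machine. The capture graph builder is hosted in a DirectShow DLL (commonly reported as qcap.dll, though that is worth verifying), and re-registering it with regsvr32 is the usual suggestion in that case:

        #include <cstdio>
        #include <dshow.h>   // link with strmiids.lib

        ICaptureGraphBuilder2* builder = NULL;
        HRESULT hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL,
                                      CLSCTX_INPROC_SERVER,
                                      IID_ICaptureGraphBuilder2,
                                      reinterpret_cast<void**>(&builder));
        if (FAILED(hr)) {
            // REGDB_E_CLASSNOTREG (0x80040154): try "regsvr32 qcap.dll" (assumed host DLL)
            std::printf("CoCreateInstance failed: 0x%08lX\n", static_cast<unsigned long>(hr));
        }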


  • Emulating a computer running MS-DOS

    - by Richard
    Writing emulators has always fascinated me. Now I want to write an emulator for an IBM PC and run MS-DOS on it (I've got the floppy image files). I have good experience in C++ and C and basic knowledge of assembler and the architecture of a CPU. I also know that there are thousands of emulators out there doing exactly what I want to do, but I'd be doing this for pure joy only. How much work do I have to expect? (If my goal is to boot DOS and create a text file with it, all emulated) What CPU should I emulate ? Where can I find documentation on how the machine code is organized and which opcodes mean what, so I can unpack and execute them correctly with my emulator? Does MS-DOS still run on the newest generations of processors? Would it theoretically be able to natively run on a 64-bit AMD Phenom 2 processor w/ a modern mainboard, HDD, RAM, etc.? What else, besides emulating the CPU, could be an important factor (in terms of difficulty)? I would only aim for outputting / inputting text to the system via the host system's console, no sound or other more advanced IO etc. Have you written an emulator yet? What was your first one for? How hard was it? Do you have any special tips for me? Thanks in advance
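
    Not an answer to the hardware questions, but for a sense of scale: the heart of any CPU emulator is a fetch-decode-execute loop over emulated memory and registers, and the work estimate is mostly "that loop times a few hundred opcodes, plus interrupts, segment arithmetic and BIOS services". A toy sketch of the loop's shape (the two opcode encodings are real 8086 ones; everything else is deliberately minimal):

        #include <cstdint>
        #include <vector>

        struct CPU {
            uint16_t ax = 0, ip = 0;          // tiny register file, for illustration only
            bool halted = false;
            std::vector<uint8_t> mem = std::vector<uint8_t>(1 << 20);  // 1 MiB, like a real-mode PC

            void step() {
                uint8_t opcode = mem[ip++];   // fetch
                switch (opcode) {             // decode + execute
                    case 0x40: ax += 1;        break;  // INC AX (actual 8086 encoding)
                    case 0xF4: halted = true;  break;  // HLT
                    default:   halted = true;  break;  // unimplemented opcode
                }
            }
        };

        int main() {
            CPU cpu;
            cpu.mem[0] = 0x40;   // tiny program: INC AX
            cpu.mem[1] = 0xF4;   //               HLT
            while (!cpu.halted) cpu.step();
            return cpu.ax;       // exits with 1
        }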


  • transferring binary files between systems

    - by tim
    Hi guys, I'm trying to transfer my files between 2 Unix clusters. The data is pure numeric (vectors of doubles) in binary form. Unfortunately, one of the systems is an IBM ppc997 and the other is an AMD Opteron, and it seems the format of binary numbers on these systems is different. I have tried 3 ways so far:

    1- Changed my files to ASCII format (i.e. saved one number per line in a text file), sent them to the destination, and changed them back to binary on the target system (they are both Unix, so no end-of-line character difference, right?)
    2- Sent the pure binaries to the destination
    3- Used uuencode, sent them to the destination, and decoded them

    Unfortunately, none of these methods works (my code on the destination system generates garbage, while it works on the first system; I'm 100% sure the code itself is portable). I don't know what else I can do. Do you have any idea? I'm not a professional; please don't use computer-scientist terminology!

    And: my code is in C, so by binary I mean a one-to-one mapping between memory and hard disk. Thanks
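
    A hedged guess at the cause, since the symptoms match it exactly: the IBM machine is big-endian PowerPC and the Opteron is little-endian x86, so the same eight bytes of a double are read in opposite orders (which also means method 1, plain ASCII, really should have worked and may be worth re-checking). Both machines use IEEE 754 doubles, so reversing the bytes of each value after a raw binary transfer is enough; a sketch:

        #include <stdio.h>

        /* reverse the 8 bytes of one double in place (big- <-> little-endian) */
        void swap_double(double *v)
        {
            unsigned char *p = (unsigned char *)v;
            for (int i = 0; i < 4; i++) {
                unsigned char t = p[i];
                p[i] = p[7 - i];
                p[7 - i] = t;
            }
        }

        int main(void)
        {
            double x = 3.14159;
            swap_double(&x);   /* as if the value were written by the other machine */
            swap_double(&x);   /* swapping again on the reader restores it */
            printf("%g\n", x); /* prints 3.14159 */
            return 0;
        }

    In practice you would fread() the whole vector and run swap_double over each element once.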


  • CPU overheating because of Delphi IDE

    - by Altar
    I am using Delphi 7, but I have trialed the Delphi 2005-2010 versions. In all these newer versions, my CPU utilization is 50% (one core is at 100%, the other is relaxed) whenever the Delphi IDE is visible on screen. It doesn't happen when the IDE is minimized. My computer is overheating because of this. Any hints as to why this happens? It looks like if I want to upgrade to Delphi 2010, I need to upgrade my cooling system first, and I am a bit lazy about that, especially since I want to retire this computer and buy a new one (in the next 6 months); probably I will have to buy a Win 7 license too.

    Edit: My CPU is an AMD dual-core 4600+, with 4 GB RAM. Overheating means that the temperature of the HDDs is rising because of the CPU.

    Edit: Coming to a possible solution, I just deleted the whole HKCU CodeGear key and started Delphi as if "new". Guess what? The CPU utilization is 0%. I will investigate more.


  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text.

    According to the standard, objects of different types may not share the same memory location. So this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard, however, allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me whether this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers: the answer is NO, for two reasons:

    1. You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    2. The pointer address may not have the right alignment for the target object, in which case dereferencing it would result in undefined behavior. On the Intel and AMD platforms it will result in a performance overhead; on ARM it will trigger a CPU trap and your program will be terminated!

    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
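
    A hedged sketch of the standard-blessed way to pull a typed value out of such a byte buffer, which sidesteps both the no-object and the alignment problems by copying rather than aliasing:

        #include <cstring>

        unsigned read_unsigned(const char* buffer)
        {
            unsigned u;
            std::memcpy(&u, buffer, sizeof u);  // creates a real unsigned from raw bytes
            return u;                           // compilers typically fold this into one load
        }

    (Byte order still has to match between the writer and the reader, of course.)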

