Search Results

Search found 2467 results on 99 pages for 'bits'.


  • How do I use LibreOffice's 3d transitions in Impress?

    - by Lvkz
    How can I get the 3D transitions working in Impress? I have a presentation coming up soon, and as a requirement of the course the professor wants us to use transitions in our "PowerPoint" chapter. Obviously I have been using LibreOffice for every exercise, but the native transitions are kind of lame, so whenever I install a newer version of Ubuntu I always install the extra package for the transitions - this time I installed the 3D package: libreoffice-ogltrans 1:3.4.3-3ubuntu2. In previous versions of Ubuntu it worked perfectly, but for some reason it is not working in this release. I have LibreOffice 3.4.3 on Ubuntu Oneiric Ocelot (11.10), and my hardware should not be relevant because I had it working on previous releases. I know it is not critical, but for my class it is a pretty important deal, and it is a perfect opportunity to show the class that the cool stuff is not only in Windows. At the recommendation of Eliah Kagan, I'm including the output of sudo lshw -C video: *-display:0 description: VGA compatible controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 07 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:f6c00000-f6ffffff memory:e0000000-efffffff ioport:efe8(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 07 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:f6b00000-f6bfffff And I'm not using Unity - it isn't there anyway - I'm using GNOME Shell instead.

  • Ubuntu Wireless not working on Lenovo t400

    - by VmaxBoss
    This problem started after upgrading to 12.04, an my system is 'up2date' Have tried most of the solution-proposals found on the net. lspci -nnk | grep -iA2 net 00:19.0 Ethernet controller [0200]: Intel Corporation 82567LF Gigabit Network Connection [8086:10bf] (rev 03) Subsystem: Lenovo Device [17aa:20ee] Kernel driver in use: e1000e 03:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237] Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211] Kernel driver in use: iwlagn iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11abgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off sudo lshw -C network *-network description: Ethernet interface product: 82567LF Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 03 serial: 00:22:68:1a:c4:75 size: 100Mbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.0.2-k2 duplex=full firmware=1.8-3 ip=192.168.2.154 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:29 memory:fc000000-fc01ffff memory:fc024000-fc024fff ioport:1820(size=32) *-network DISABLED description: Wireless interface product: PRO/Wireless 5100 AGN [Shiloh] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 00 serial: 00:26:c6:6c:2d:24 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn latency=0 multicast=yes wireless=IEEE 802.11abgn resources: irq:30 memory:f4300000-f4301fff Please help Br/VB

  • What is the standard way of using Q15 values?

    - by Alex
    To process 8-bit pixels and do things like gamma correction without losing information, we normally upsample the values, work in 16 bits or whatever, and then downsample them back to 8 bits. Now, this is a somewhat new area for me, so please excuse any incorrect terminology. For my needs I have chosen to work in "non-standard" Q15, where I only use the upper half of the range (0.0 to 1.0), and 0x8000 represents 1.0 instead of -1.0. This makes it much easier to calculate things in C. But I ran into a problem with SSSE3. It has the PMULHRSW instruction, which multiplies Q15 numbers, but it uses the "standard" Q15 range of [-1, 1 - 2^-15], so multiplying (my) 0x8000 (1.0) by 0x4000 (0.5) gives 0xC000 (-0.5), because it thinks 0x8000 is -1. This is quite annoying. What am I doing wrong? Should I keep my pixel values in the 0x0000-0x7FFF range? That kind of defeats the purpose of it being a fixed-point format. Is there a way around this? Maybe some trick? Is there some kind of definitive treatise on Q15 which discusses all this?
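    A quick sketch of the arithmetic in question (Python, purely to illustrate; the helper names are mine, and the saturation corner case of the real instruction, 0x8000 * 0x8000, is ignored). It models PMULHRSW as a rounded signed Q15 multiply, which is why 0x8000 comes out as -1.0:

        def to_int16(x):
            """Reinterpret a 16-bit pattern as a signed integer."""
            x &= 0xFFFF
            return x - 0x10000 if x & 0x8000 else x

        def pmulhrsw(a, b):
            """Model of SSSE3 PMULHRSW: round-to-nearest of (a * b) / 2**15 on signed 16-bit values."""
            prod = to_int16(a) * to_int16(b)
            return ((prod + 0x4000) >> 15) & 0xFFFF

        # Under the 0x8000-means-1.0 convention, 0x8000 * 0x4000 "should" give 0x4000 (0.5)...
        print(hex(pmulhrsw(0x8000, 0x4000)))   # 0xc000: the instruction treated 0x8000 as -1.0
        # Staying in 0x0000-0x7FFF (so the sign bit never fires) avoids the problem:
        print(hex(pmulhrsw(0x7FFF, 0x4000)))   # 0x4000, i.e. roughly 0.5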

  • ATI 9550 shows up as laptop in displays after update to 12.04, how do I fix this?

    - by D_H
    My guess is this is on here somewhere, but I have searched and even tried looking at a bunch of other similar video problems. My ATI 9550 shows up as a laptop in Displays after the update to Ubuntu 12.04; how do I fix this? I found the command sudo lshw -c video on another post, and I get this when I run it: *-display:0 UNCLAIMED description: VGA compatible controller product: RV350 AS [Radeon 9550] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 66MHz capabilities: agp agp-3.0 pm vga_controller bus_master cap_list configuration: latency=32 mingnt=8 resources: memory:c0000000-cfffffff ioport:c000(size=256) memory:e5000000-e500ffff memory:e4000000-e401ffff *-display:1 UNCLAIMED description: Display controller product: RV350 AS [Radeon 9550] (Secondary) vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0.1 bus info: pci@0000:01:00.1 version: 00 width: 32 bits clock: 66MHz capabilities: pm cap_list configuration: latency=32 mingnt=8 resources: memory:d0000000-dfffffff memory:e5010000-e501ffff This is way more info than the command showed in the other post, and as far as I can tell it looks right. It doesn't look to me like what a laptop video card would list. I also ran xrandr, and it reports this: xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1280 x 1024, maximum 1280 x 1024 default connected 1280x1024+0+0 0mm x 0mm 1280x1024 0.0* 1024x768 0.0 800x600 0.0 640x480 0.0 Those are the resolutions that show up in Displays, but only 1280x1024 works; the others produce tearing in the video. I should also have mentioned that 3D mode does not work. I have tried the ATI/AMD drivers: the new one won't load and the older ones won't work. I found out the newer driver no longer supports the 9550.

  • Wireless does not work 12.10

    - by superkoop
    My primary issue is that my wireless does not work after I installed 12.10. The output to rfkill list all: 5: hci0: Bluetooth Soft blocked: no Hard blocked: no The output to lshw -class network is: *-network description: Ethernet interface product: 88E8040 PCI-E Fast Ethernet Controller vendor: Marvell Technology Group Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 12 serial: 00:21:9b:d6:46:51 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full ip=192.168.1.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:44 memory:fe8fc000-fe8fffff ioport:de00(size=256) *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:0b:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:17 memory:fe7fc000-fe7fffff The output to lspci -nn for the pertinent information is: 0b:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] (rev 01) Thus, it seems the solution would be to run: sudo apt-get install linux-headers-generic sudo apt-get install --reinstall bcmwl-kernel-source sudo modprobe wl However, I do not currently have access to an ethernet connection, as I am currently only able to use verizon wireless 3g internet. Thus, is there a way to set up ICS with a Vista machine so that I can access the internet by using the Vista machine as the host? Or, is it possible to fix this by downloading the important packages in vista and moving them to ubuntu via USB drive?

  • Is my graphics card in use or not?

    - by Lindhe94
    I have a Samsung Series 7 NP730U3E running Ubuntu GNOME 13.10. This computer has an Intel Core i5 3337U and an AMD Radeon HD 8570M inside. Ubuntu 13.10 is said to have driver support for this graphics card, but I am not sure whether or not this is the case. When I check System Settings > Details it says "Graphics: Intel® Ivybridge Mobile", and lspci | grep VGA returns VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09). But lshw -c video returns *-display description: Display controller product: Mars [Radeon HD 8730M] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:47 memory:e0000000-efffffff memory:f7e00000-f7e3ffff ioport:e000(size=256) memory:f7e40000-f7e5ffff *-display description: VGA compatible controller product: 3rd Gen Core processor Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:46 memory:f7800000-f7bfffff memory:d0000000-dfffffff ioport:f000(size=64) So what is the case? Is my graphics card in use, or does my laptop have undiscovered powers yet to yield?

  • After upgrade to 12.04 wireless keeps dropping on BCM4312

    - by Sheket
    I know there are plenty of questions very similar to these, but I've tried practically everything and it still isn't working. Some solutions get the wireless connection working, but it goes very slow and drops after a few minutes. Then it won't reconnect and keeps asking for password. Hope you can help me. Thanks in advance. This is the output for sudo lshw -C network *-network description: Network controller product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:18 memory:f0300000-f0303fff *-network description: Ethernet interface product: AR8132 Fast Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: c0 serial: 00:23:5a:9b:6e:b1 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.0.106 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:42 memory:f0200000-f023ffff ioport:a000(size=128) *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:24:2c:83:f0:81 capabilities: ethernet physical wireless configuration: broadcast=yes driver=b43 driverversion=3.2.0-30-generic firmware=478.104 link=no multicast=yes wireless=IEEE 802.11bg And for lsmod | grep b43 b43 342643 0 mac80211 436455 1 b43 cfg80211 178679 2 b43,mac80211 bcma 25651 1 b43 ssb 50691 1 b43 And for rfkill list 5: phy0: Wireless LAN Soft blocked: no Hard blocked: no

  • No WiFi on Ubuntu 12.04 LTS after today's software update

    - by Adchara
    I just got a new Dell Inspiron 3537 with Ubuntu 12.04 LTS (no Windows OS). The wireless got hard-blocked yesterday, so this morning I ran the software update and installed all the security updates. After that I can't see "Wireless" in System Settings. I then applied all the remaining updates, looked through several websites, and found the sudo lshw -c network command. I tried it and got the result below. *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:01:00.0 logical name: eth0 version: 07 serial: 74:86:7a:40:5d:48 size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.1.10 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:60 ioport:4000(size=256) memory:c0700000-c0700fff memory:c0400000-c0403fff *-network UNCLAIMED description: Network controller product: QCA9565 / AR9565 Wireless Network Adapter vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:c0600000-c067ffff memory:9fb00000-9fb0ffff Please suggest what I should do to fix it. Thanks in advance.

  • Video quality too bad while playing (any) videos in Intel GM965/GL960 Integrated Graphics Controller Ubuntu 12.04

    - by Sukhdev
    I have searched blogs and forums, installed several drivers, but can't find a solution that can provide equivalent video quality as that of Windows 7. Kindly help. Video quality specially color is too bad while playing with any media player. Configuration details are: Ubuntu - 12.04 Intel Corporation Mobile GM965/GL960 Integrated The results of the following commands are a) sudo lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (primary) (rev 0c) b) find /dev -group video /dev/fb0 /dev/dri/card0 /dev/dri/controlD64 /dev/agpgart c) glxinfo | grep -i vendor server glx vendor string: SGI client glx vendor string: ATI OpenGL vendor string: Tungsten Graphics, Inc d) sudo lshw -C video *-display:0 description: VGA compatible controller product: Mobile GM965/GL960 Integrated Graphics Controller (primary) vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 0c width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:44 memory:fea00000-feafffff memory:e0000000-efffffff ioport:efe8(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile GM965/GL960 Integrated Graphics Controller (secondary) vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 0c width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:feb00000-febfffff I have spent days installing various drivers, and then un-installing but can't come up with a solution. Please help.

  • deadlocks in the innodb status

    - by shantanuo
    Mysql sever has suddenly become very slow. There are no queries in the slow query log but the innodb status shows something like the following. Does it mean that it is due to innodb deadlock? if Yes, what is the way out? *************************** 1. row *************************** Status: ===================================== 100315 12:55:29 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 5 seconds ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 187532, signal count 188120 Mutex spin waits 0, rounds 61908654, OS waits 33052 RW-shared spins 89241, OS waits 41948; RW-excl spins 5857, OS waits 1557 ------------------------ LATEST DETECTED DEADLOCK ------------------------ 100315 12:43:02 *** (1) TRANSACTION: TRANSACTION 0 56996536, ACTIVE 0 sec, process no 5000, OS thread id 3031395216 starting index read mysql tables in use 1, locked 1 LOCK WAIT 6 lock struct(s), heap size 1024, undo log entries 6 MySQL thread id 994, query id 7699751 localhost application Searching rows for update UPDATE QUERY *** (1) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `dbII/tbl_ticket_block_master` trx id 0 56996536 lock_mode X locks r ec but not gap waiting Record lock, heap no 141 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837393936; asc 3587996;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 2; hex 6f6b; asc ok;; 4: le n 6; hex 0000035957fe; asc YW ;; 5: len 7; hex 000000401737c0; asc @ 7 ;; 6: SQL NULL; 7: SQL NULL; 8: SQL NULL; 9: len 3; hex 8fb46e; asc n;; 10: SQL NULL; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ;; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9ceebe ; asc K ;; 16: len 1; hex 30; asc 0;; 17: len 4; hex 80006ae8; asc j ;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;; *** (2) TRANSACTION: TRANSACTION 0 56996527, ACTIVE 0 sec, process no 5000, OS thread id 2961476496 fetching rows, thread declared inside InnoDB 237 mysql tables in use 3, locked 3 121 lock struct(s), heap size 11584, undo log entries 16 MySQL thread id 995, query id 7699729 localhost application Searching rows for update UPDATE QUERY *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `DBII/tbl_ticket_block_master` trx id 0 56996527 lock_mode X Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0 0: len 8; hex 73757072656d756d; asc supremum;; Record lock, heap no 2 PHYSICAL RECORD: n_fields 23; compact format; info bits 0 0: len 7; hex 33353837343631; asc 3587461;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 6; hex 497373756564; asc Is sued;; 4: len 6; hex 000003425295; asc BR ;; 5: len 7; hex 8000000464012c; asc d ,;; 6: SQL NULL; 7: len 4; hex 80000058; asc X;; 8: len 1; hex 43; asc C;; 9: len 3; hex 8fb465; asc e;; 10: len 3; hex 8fb46d; asc m;; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ; ; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9b33a2; asc K 3 ;; 16: len 3; hex 756d67; asc umg;; 17: len 4; hex 80006744; asc gD;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;;
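    For reference: the LATEST DETECTED DEADLOCK section only shows the most recent deadlock InnoDB resolved (it rolls back one of the two transactions with error 1213), and occasional deadlocks are normal rather than a typical cause of a slow server. The usual way out is to keep transactions short, touch rows in a consistent order, and have the application retry on error 1213. A rough sketch of such a retry loop (Python with PyMySQL purely for illustration; the column names are made up, only the table name comes from the status output):

        import time
        import pymysql

        DEADLOCK = 1213   # MySQL error code ER_LOCK_DEADLOCK

        def update_with_retry(conn, ticket_id, status, attempts=3):
            """Run the UPDATE in a transaction, retrying if InnoDB picks us as the deadlock victim."""
            for attempt in range(attempts):
                try:
                    with conn.cursor() as cur:
                        cur.execute(
                            "UPDATE tbl_ticket_block_master SET status = %s WHERE ticket_id = %s",
                            (status, ticket_id),
                        )
                    conn.commit()
                    return
                except pymysql.MySQLError as exc:
                    conn.rollback()
                    if exc.args and exc.args[0] == DEADLOCK and attempt < attempts - 1:
                        time.sleep(0.1 * (attempt + 1))   # brief back-off, then retry
                        continue
                    raise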

  • Running Jetty under Windows Azure Using RoleEntryPoint in a Worker Role

    - by Shawn Cicoria
    This post is built upon the work of Mario Kosmiskas and David C. Chou’s prior postings – from here: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx  http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx As Mario points out in his post, when you need to have more control over the process that starts, it generally is better left to a RoleEntryPoint capability that as of now, requires the use of a CLR based assembly that is deployed as part of the package to Azure. There were things I liked especially about Mario’s post – specifically, the ability to pull down the JRE and Jetty runtimes at role startup and instantiate the process using the extracted bits.  The way Mario initialized the java process (and Jetty) was to take advantage of a role startup task configured as part of the service definition.  This is a great quick way to kick off processes or tasks prior to your role entry point.  However, if you need access to service configuration values or role events, that’s where RoleEntryPoint comes in.  For this PoC sample I moved the logic for retrieving the bits for the jre and jetty to the worker roles OnStart – in addition to moving the process kickoff to the OnStart method.  The Run method at this point is there to loop and just report the status of the java process. Beyond just making things more parameterized, both Mario’s and David’s articles still form the essence of the approach. The solution that accompanies this post provides all the necessary .NET based Visual Studio project.  In addition, you’ll need: 1. Jetty 7 runtime http://www.eclipse.org/jetty/downloads.php 2. JRE http://www.oracle.com/technetwork/java/javase/downloads/index.html Once you have these the first step is to create archives (zips) of the distributions.  For this PoC, the structure of the archive requires that the root of the archive looks as follows: JRE6.zip jetty---.zip Upload the contents to a storage container (block blob), and for this example I used /archives as the location.  The service configuration has several settings that allow, which is the advantage of using RoleEntryPoint, the ability to provide these things via native configuration support from Azure in a worker role. Storage Explorer You can use development storage for testing this out – the zipped version of the solution is configured for development storage.  When you’re ready to deploy, you update the two settings – 1 for diagnostics and the other for the storage container where the /archives are going to be stored. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="HostedJetty" osFamily="2" osVersion="*"> <Role name="JettyWorker"> <Instances count="1" /> <ConfigurationSettings> <!--<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />--> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="JettyArchive" value="jetty-distribution-7.3.0.v20110203b.zip" /> <Setting name="StartRole" value="true" /> <Setting name="BlobContainer" value="archives" /> <Setting name="JreArchive" value="jre6.zip" /> <!--<Setting name="StorageCredentials" value="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>"/>--> <Setting name="StorageCredentials" value="UseDevelopmentStorage=true" />   For interacting with Storage you can use several tools – one tool that I like is from the Windows Azure CAT team located here: http://appfabriccat.com/2011/02/exploring-windows-azure-storage-apis-by-building-a-storage-explorer-application/  and shown in the prior picture At runtime, during role initialization and startup, Azure will call into your RoleEntryPoint.  At that time the code will do a dynamic pull of the 2 archives and extract – using the Sharp Zip Lib <link> as Mario had demonstrated in his sample.  The only different here is the use of CLR code vs. PowerShell (which is really CLR, but that’s another discussion). At this point, once the 2 zips are extracted, the Role’s file system looks as follows: Worker Role approot From there, the OnStart method (which also does the download and unzip using a simple StorageHelper class) kicks off the Java path and now you have Java! Task Manager Jetty Sample Page A couple of things I’m working on to enhance this is to extract the jre and jetty bits not to the appRoot but to a resource location defined as part of the service definition. ServiceDefinition.csdef <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="HostedJetty" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="JettyWorker"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> <Endpoints> <InputEndpoint name="JettyPort" protocol="tcp" port="80" localPort="8080" /> </Endpoints> <LocalResources> <LocalStorage name="Archives" cleanOnRoleRecycle="false" sizeInMB="100" /> </LocalResources>   As the concept matures a bit, being able to update dynamically the content or jar files as part of a running java solution is something that is possible through continued enhancement of this simple model. The Visual Studio 2010 Solution is located here: HostingJavaSln_NDA.zip

  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so. BitArray System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray: It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this: foreach (bool b in bitArray) { ... } Every single boolean value will be boxed, then unboxed. And if you do this: foreach (var b in bitArray) { ... } you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop: for (int i=0; i<bitArray.Length; i++) { bool b = bitArray[i]; ... } Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class, or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it). There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though. It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code. However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space. OrderedDictionary System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short; don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation. ListDictionary and HybridDictionary To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. 
For this test, I added n items to each collection, then randomly accessed n/2 items: So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-o notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable. So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold less than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there's probably more useful performance gains you can make elsewhere.

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer "speaks" in '1's and '0's. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can't explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s. Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because knowledge of the low-level concepts seems to be waning, I thought a review would be in order.

    CHARACTER: See Spot Run
    HEX: 53 65 65 20 53 70 6F 74 20 52 75 6E
    DECIMAL: 83 101 101 32 83 112 111 116 32 82 117 110
    BINARY: 01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words "See Spot Run" to a level computers can understand – machine language.

    CHARACTER: The character level is what is rendered by the computer. A "Character Set" or "Code Page" contains 256 characters, both printable and unprintable. Each character represents 1 BYTE of data. For example, the character string "See Spot Run" is 12 bytes long, exclusive of the quotation marks. Remember, a SPACE is an unprintable character, but it still requires a byte. In the example I have used the default Windows character set, ASCII, which you can see here: http://www.asciitable.com/

    HEX: Hex is short for hexadecimal, or Base 16. Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it's not much of a stretch. Computer data is commonly written in Base 16, with numeric values ranging from zero to fifteen, or 0 – F. Each place has 16 possible values as opposed to 10 possible values in base 10. Therefore, the number 10 in hex is equal to the number 16 in decimal.

    DECIMAL: The decimal conversion is strictly for us humans to use for calculations and conversions. It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in hex. In the old days, an error in a program could be found by determining the displacement from the entry point of a module. Since those values were dumped from the computer's head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex. This gets into relative and absolute addressing, a topic for another day.

    BINARY: Binary, or machine code, is where any value can be expressed in 1s and 0s. It is really Base 2, because each place can hold only one of 2 characters, a 1 or a 0. In binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s? Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ).

    Each transistor is a bit, and the order in which the transistors fire (or don't fire) is what distinguishes one value from another in the computer's head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

    DECIMAL   HEX   BINARY
    0         0     0000
    1         1     0001
    2         2     0010
    3         3     0011
    4         4     0100
    5         5     0101
    6         6     0110
    7         7     0111
    8         8     1000
    9         9     1001
    10        A     1010
    11        B     1011
    12        C     1100
    13        D     1101
    14        E     1110
    15        F     1111

    Remember that each character is a BYTE, there are 2 hex characters in a byte (called nibbles) and 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
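    A quick way to double-check the "See Spot Run" breakdown and the table above (a Python snippet of my own, not part of the original article):

        text = "See Spot Run"

        # One row per character: the character itself, then its hex, decimal and 8-bit binary codes.
        for ch in text:
            print(ch, format(ord(ch), "02X"), ord(ch), format(ord(ch), "08b"))

        # The base relationships from the article: 0x10 is 16, binary 10 is 2, and 0x1E - 0xA is 0x14 (30 - 10 = 20).
        print(int("10", 16), int("10", 2), hex(0x1E - 0xA))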

  • Is there a good way to convert between BitmapSource and Bitmap?

    - by JohannesH
    As far as I can tell, the only way to convert from BitmapSource to Bitmap is through unsafe code... Like this (from Lester's WPF blog): myBitmapSource.CopyPixels(bits, stride, 0); unsafe { fixed (byte* pBits = bits) { IntPtr ptr = new IntPtr(pBits); System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap( width, height, stride, System.Drawing.Imaging.PixelFormat.Format32bppPArgb, ptr); return bitmap; } } To do the reverse: System.Windows.Media.Imaging.BitmapSource bitmapSource = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap( bitmap.GetHbitmap(), IntPtr.Zero, Int32Rect.Empty, System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions()); Is there an easier way in the framework? And what is the reason it isn't in there (if it's not)? I would think it's fairly useful. The reason I need it is that I use AForge to do certain image operations in a WPF app. WPF wants to show BitmapSource/ImageSource, but AForge works on Bitmaps.

  • *UPDATED* help with django and accented characters?

    - by Asinox
    Hi guys, i have a problem with my accented characters, Django admin save my data without encoding to something like "&aacute;" Example: if im trying a word like " Canción ", i would like to save in this way: Canci&oacute;n, and not Canción. im usign Sociable app: {% load sociable_tags %} {% get_sociable Facebook TwitThis Google MySpace del.icio.us YahooBuzz Live as sociable_links with url=object.get_absolute_url title=object.titulo %} {% for link in sociable_links %} <a href="{{ link.link }}"><img alt="{{ link.site }}" title="{{ link.site }}" src="{{ link.image }}" /></a> {% endfor %} But im getting error if my object.titulo (title of the article) have a accented word. aught KeyError while rendering: u'\xfa' Any idea ? i had in my SETTING: DEFAULT_CHARSET = 'utf-8' i had in my mysql database: utf8_general_ci COMPLETED ERROR: Traceback: File "C:\wamp\bin\Python26\lib\site-packages\django\core\handlers\base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "C:\wamp\bin\Python26\lib\site-packages\django\views\generic\date_based.py" in object_detail 366. response = HttpResponse(t.render(c), mimetype=mimetype) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 173. return self._render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in _render 167. return self.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\loader_tags.py" in render 125. return compiled_parent._render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in _render 167. return self.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\loader_tags.py" in render 62. result = block.nodelist.render(context) File "C:\wamp\bin\Python26\lib\site-packages\django\template\__init__.py" in render 796. bits.append(self.render_node(node, context)) File "C:\wamp\bin\Python26\lib\site-packages\django\template\debug.py" in render_node 72. result = node.render(context) File "C:\wamp\bin\Python26\lib\site-packages\sociable\templatetags\sociable_tags.py" in render 37. 'link': sociable.genlink(site, **self.values), File "C:\wamp\bin\Python26\lib\site-packages\sociable\sociable.py" in genlink 20. values['title'] = quote_plus(kwargs['title']) File "C:\wamp\bin\Python26\lib\urllib.py" in quote_plus 1228. s = quote(s, safe + ' ') File "C:\wamp\bin\Python26\lib\urllib.py" in quote 1222. res = map(safe_map.__getitem__, s) Exception Type: TemplateSyntaxError at /noticia/2010/jun/10/matan-domingo-paquete-en-la-avenida-san-vicente-de-paul/ Exception Value: Caught KeyError while rendering: u'\xfa' thanks, sorry with my English
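    The KeyError at the bottom of that traceback comes from Python 2's urllib.quote_plus, which cannot handle non-ASCII unicode strings (u'\xfa' is the accented character the sociable tag passed it). A minimal sketch of the usual fix, encode to UTF-8 before quoting, shown outside the sociable app (the title value here is made up):

        # Python 2.x
        import urllib

        titulo = u"Canci\xf3n"                     # a title containing an accented character

        # urllib.quote_plus(titulo)                # raises KeyError on the accented char, as in the traceback

        # Encode to UTF-8 bytes first, then quote; Django's |urlencode template filter
        # does effectively the same thing for you.
        print(urllib.quote_plus(titulo.encode("utf-8")))   # Canci%C3%B3n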

  • CGBitmapContextCreate: unsupported parameter combination

    - by tarmes
    I'm getting this error when creating a bitmap context: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 7936 bytes/row. Here's the code (note that the context is based on the parameters of an existing CGImage): context = CGBitmapContextCreate(NULL, (int)pi.bufferSizeRequired.width, (int)pi.bufferSizeRequired.height, CGImageGetBitsPerComponent(imageRef), 0, CGImageGetColorSpace(imageRef), CGImageGetBitmapInfo(imageRef)); Width is 2626, height is 3981. I'm leaving bytesPerRow at zero so that it gets calculated automatically for me, and it has chosen 7936 of its own accord. So where on Earth is the inconsistency? It's driving me nuts.

  • binary protocols v. text protocols

    - by der_grosse
    Does anyone have a good definition of what a binary protocol is? And what is a text protocol, actually? How do these compare to each other in terms of the bits sent on the wire? Here's what Wikipedia says about binary protocols: A binary protocol is a protocol which is intended or expected to be read by a machine rather than a human being (http://en.wikipedia.org/wiki/Binary_protocol). Oh come on! To be more clear: if I have a jpg file, how would that be sent through a binary protocol, and how through a text one? In terms of the bits/bytes sent on the wire, of course. At the end of the day, if you look at a string it is itself an array of bytes, so the distinction between the two protocols should rest on what actual data is being sent on the wire - in other words, on how the initial data (the jpg file) is encoded before being sent. Any comments are appreciated, I am trying to get to the essence of things here. Salutations!
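    A small illustration of the on-the-wire difference (Python; my own example, not from the question): the same 32-bit value sent as raw binary versus as ASCII text, and a binary payload re-encoded for a text protocol:

        import struct, base64

        n = 1234567890
        binary = struct.pack(">I", n)      # 4 bytes on the wire: b'I\x96\x02\xd2'
        text = str(n).encode("ascii")      # 10 bytes on the wire: b'1234567890'
        print(len(binary), len(text))      # 4 10

        # A binary blob such as a jpg goes over a binary protocol as-is, but a text
        # protocol typically re-encodes it (e.g. base64, as email does), costing about
        # 4 output bytes for every 3 input bytes.
        jpg_start = b"\xff\xd8\xff\xe0" + b"\x00" * 8    # stand-in for the first bytes of a JPEG
        print(len(jpg_start), len(base64.b64encode(jpg_start)))   # 12 16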

  • How to use boost::crc?

    - by Andreas Bonini
    I want to use boost::crc so that it works exactly like PHP's crc32() function. I tried reading the horrible documentation and, many headaches later, I haven't made any progress. Apparently I have to do something like: int GetCrc32(const string& my_string) { return crc_32 = boost::crc<bits, TruncPoly, InitRem, FinalXor, ReflectIn, ReflectRem>(my_string.c_str(), my_string.length()); } bits should be 32... What the other things are is a mystery. A little help? ;)
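    For what it's worth, PHP's crc32() is the standard CRC-32 (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF, reflected input and output), which is what Boost's ready-made boost::crc_32_type typedef in <boost/crc.hpp> computes, so you normally don't need to fill in the template parameters yourself. A quick Python check of the value to expect (zlib implements the same CRC-32; the sample string is mine):

        import zlib

        data = b"The quick brown fox jumps over the lazy dog"
        # PHP: crc32("The quick brown fox jumps over the lazy dog") == 1095738169
        print(zlib.crc32(data) & 0xFFFFFFFF)         # 1095738169
        print(hex(zlib.crc32(data) & 0xFFFFFFFF))    # 0x414fa339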

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (error formatted for easy reading): CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component colorspace; kCGImageAlphaLast; 1344 bytes/row. The list of supported pixel formats definitely does not include this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast, but I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted; it works in all other contexts just fine. I encountered this error just by chance, but obviously my users might have similarly formatted files, so I will have to check my app's imported images and correct for this problem.

  • Fastest way to calculate an X-bit bitmask?

    - by Virtlink
    I have been trying to solve this problem for a while, but couldn't with just integer arithmetic and bitwise operators. However, I think it's possible and it should be fairly easy. What am I missing? The problem: to get an integer value of arbitrary length (this is not relevant to the problem) with its X least significant bits set to 1 and the rest set to 0. For example, given the number 31, I need to get an integer value which equals 0x7FFFFFFF (the 31 least significant bits are 1 and the rest are zeros). Of course, using a loop to OR a shifted 1 into an integer X times will do the job. But that's not the solution I'm looking for. It should be more in the direction of (X << Y - 1), thus using no loops.
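    The usual loop-free trick is to shift first and then subtract: (1 << X) - 1 sets exactly the X lowest bits. A short sketch in Python (where integers are arbitrary precision; in C you would still need to special-case X equal to the type's width, since shifting a 32-bit 1 left by 32 is undefined):

        def low_bits_mask(x):
            """Return an integer whose x least significant bits are 1 and the rest 0."""
            return (1 << x) - 1

        print(hex(low_bits_mask(31)))   # 0x7fffffff
        print(hex(low_bits_mask(4)))    # 0xf
        print(low_bits_mask(0))         # 0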

  • Working with bytes and binary data in Python

    - by ignoramus
    Four consecutive bytes in a byte string together specify some value. However, only 7 bits in each byte are used; the most significant bit is ignored (that makes 28 bits altogether). So... b"\x00\x00\x02\x01" would be 000 0000 000 0000 000 0010 000 0001. Or, for the sake of legibility, 10 000 0001. That's the value the four bytes represent. But I want a decimal, so I do this: >>> 0b100000001 257 I can work all that out myself, but how would I incorporate it into a program?
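    One way to put that into a program (a sketch; the function name is mine) is to take the low 7 bits of each byte and shift them together, exactly as worked out above:

        def decode_7bit(data):
            """Combine bytes that each carry 7 useful bits (most significant bit ignored), big-endian."""
            value = 0
            for byte in bytearray(data):   # bytearray so this works the same on Python 2 and 3
                value = (value << 7) | (byte & 0x7F)
            return value

        print(decode_7bit(b"\x00\x00\x02\x01"))   # 257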

  • Variable-byte encoding clarification

    - by Myx
    Hello: I am very new to the world of byte encoding so please excuse me (and by all means, correct me) if I am using/expressing simple concepts in the wrong way. I am trying to understand variable-byte encoding. I have read the Wikipedia article (http://en.wikipedia.org/wiki/Variable-width_encoding) as well as a book chapter from an Information Retrieval textbook. I think I understand how to encode a decimal integer. For example, if I wanted to provide variable-byte encoding for the integer 60, I would have the following result: 1 0 1 1 1 1 0 0 (please let me know if the above is incorrect). If I understand the scheme, then I'm not completely sure how the information is compressed. Is it because usually we would use 32 bits to represent an integer, so that representing 60 would result in 1 1 1 1 0 0 preceded by 26 zeros, thus wasting that space as opposed to representing it with just 8 bits instead? Thank you in advance for the clarifications.
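    The encoding of 60 above looks right under the textbook convention (7 payload bits per byte, high bit 0 on continuation bytes and 1 on the final byte), and yes, the compression comes precisely from small numbers taking one byte instead of the four a fixed 32-bit integer would use. A sketch of both directions (Python; the function names are mine):

        def vb_encode(n):
            """Variable-byte encode one non-negative integer (high bit marks the final byte)."""
            chunks = [n & 0x7F]
            n >>= 7
            while n:
                chunks.append(n & 0x7F)
                n >>= 7
            chunks.reverse()
            chunks[-1] |= 0x80                 # set the stop bit on the last byte
            return bytes(bytearray(chunks))

        def vb_decode(data):
            """Decode a stream of variable-byte encoded integers."""
            numbers, value = [], 0
            for byte in bytearray(data):
                value = (value << 7) | (byte & 0x7F)
                if byte & 0x80:                # stop bit: this number is complete
                    numbers.append(value)
                    value = 0
            return numbers

        print(vb_encode(60))                               # b'\xbc', i.e. 1011 1100 in one byte
        print(vb_decode(vb_encode(60) + vb_encode(300)))   # [60, 300]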

  • Size of int in C on different architectures

    - by NawaMan
    I am aware that the specification of the C language does not dictate the exact size of each integer type (e.g., int). What I am wondering is: is there a way in C (not C++) to define an integer type with a specific size that ensures it will be the same across different architectures? Like: typedef int8 <an integer with 8 bits> typedef int16 <an integer with 16 bits> Or any other way that will allow other parts of the program to be compiled on different architectures.

  • Rails ActiveRecord BigNum to JSON

    - by Jon Hoffman
    Hi, I am serializing an ActiveRecord model in rails 2.3.2 to_json and have noticed that BigNum values are serialized to JSON without quotes, however, javascript uses 64 bits to represent large numbers and only ~52(?) of those bits are available for the integer part, the rest are for the exponent. So my 17 digit numbers become rounded off, grrr. Try the following in the Firebug console: console.log(123456789012345678) So, I'm thinking that the json encoder should be smart enough to quote numbers that are too big for the javascript engines to handle. How do I fix up rails to do that? Or, is there a way to override the encoding for a single property on the model (I don't want to_s elsewhere)? Thanks.
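    For reference, a JSON number read by JavaScript becomes an IEEE 754 double, which has a 53-bit significand, so integers above 2^53 (about 9e15, i.e. 16-17 digits) stop being exact; that is why the value needs to travel as a quoted string. The effect is easy to see with doubles in Python, which uses the same 64-bit format as JavaScript numbers (the value below is made up):

        import json

        big = 123456789012345678                  # 18 digits: fine as a Ruby/Python integer
        print(float(big))                         # 1.2345678901234568e+17, already rounded
        print(int(float(big)) == big)             # False
        print(float(2**53) == float(2**53 + 1))   # True: both collapse to the same double

        # Serialized as a string, the value survives the trip through JavaScript:
        print(json.dumps({"id": str(big)}))       # {"id": "123456789012345678"}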
