Search Results

Search found 19425 results on 777 pages for 'output clause'.

  • Entity Framework 5 upgrade from 4

    - by user1714591
    I'm having an issue with the Where clause in a search. In my original EF4 version I could call Where with two parameters, a string predicate and an ObjectParameter array, such as var query = context.entities.Where(WhereClause.ToString(), Params.ToArray()); but since my upgrade to EF5 I no longer seem to have that option. Am I missing something? This was originally used to build dynamic where clauses such as "it.entity_id = @entity_id", with the variable value held in the ObjectParameter. I'm hoping I don't have to rewrite all the searches that have been built out this way, so any assistance would be greatly appreciated. Cheers
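
    A minimal sketch of one way to keep the string-predicate overload in EF5, assuming a DbContext-based model ('MyEntity' is a placeholder): drop down to the underlying ObjectContext, whose ObjectQuery<T>.Where(string, params ObjectParameter[]) overload still exists.

        // WhereClause and Params are the same objects as in the original code;
        // MyEntity is a hypothetical entity type.
        using System.Data.Entity.Infrastructure;
        using System.Data.Objects;

        var objectContext = ((IObjectContextAdapter)context).ObjectContext;
        var query = objectContext.CreateObjectSet<MyEntity>()
                                 .Where(WhereClause.ToString(), Params.ToArray());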

  • How can I close the output stream after a JSP has been included?

    - by stu
    I have a webpage that makes an ajax call to get some data. That data takes a long time to calculate, so what I did was have the first ajax call return "loading...", after which the thread goes on to calculate the data and store it in a cache. Meanwhile the client javascript checks back every few seconds with another ajax call to see if the cache has been loaded yet. Here's my problem, and it might not be a problem. After the initial ajax call, the server does a ...getServletContext().getRequestDispatcher(jsppath).include(request, response); and then I use that thread to do the calculations. I don't mind tying up the webserver thread with this, but I want the browser to get the response and not wait for the server to close the socket. I can't tell if the server is closing the socket after the include, but I'm guessing it's not. So how can I forcibly close the stream after I've written out my response, before starting my long calculations? I tried o = response.getOutputStream(); o.close(); but I get an IllegalStateException saying that the output stream has already been gotten (presumably by the JSP I'm including). So my questions:
    1) Is the webserver closing the socket? (I'm guessing not, because you could always include another JSP.)
    2) If it is, as I assume, not closing the socket, how do I do that?
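
    One way to sidestep force-closing the stream is to not do the calculation on the request thread at all: flush the "loading..." response, hand the work to a background thread, and let the container finish the response normally. A minimal sketch, where doExpensiveCalculation() and Cache are hypothetical stand-ins:

        getServletContext().getRequestDispatcher(jsppath).include(request, response);
        response.flushBuffer(); // push "loading..." out to the client now

        final String key = request.getParameter("jobId"); // hypothetical job id
        new Thread(new Runnable() {
            public void run() {
                Cache.put(key, doExpensiveCalculation()); // hypothetical cache + work
            }
        }).start(); // the servlet thread returns; the container closes the socket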

  • Minimizing MySQL output with COMPRESS() and by concatenating results?

    - by johnrl
    Hi all. It is crucial that I transfer the least amount of data possible between server and client. Therefore I thought of using the MySQL COMPRESS() function. To get the maximum compression I also want to concatenate all my results into one large string (or several of the maximum length allowed by MySQL), so that similar results compress well, and then compress that string.
    1st problem (concatenating MySQL results): SELECT name,age FROM users returns 10 results. I want to concatenate all these results into one string of the form name,age,name,age,name,age... and so on. Is this possible?
    2nd problem (compressing the results from above): When I have constructed the concatenated string as above I want to compress it. If I do SELECT COMPRESS('myname'); then it just gives me as output the character '-' - sometimes it even returns unprintable characters. How do I get COMPRESS() to return a compressed printable string that I can transfer in, e.g., ASCII encoding?
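
    A sketch answering both halves with standard MySQL built-ins: GROUP_CONCAT handles the concatenation, and wrapping COMPRESS() in HEX() (or TO_BASE64() on 5.6+) makes the binary output printable. Note GROUP_CONCAT is truncated at group_concat_max_len (1024 bytes by default), so raise it first.

        SET SESSION group_concat_max_len = 1000000;
        SELECT HEX(COMPRESS(GROUP_CONCAT(CONCAT_WS(',', name, age)))) AS payload
        FROM users;

    The client then unhexes and uncompresses; or transfer the raw binary and skip HEX() entirely, which is smaller.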

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options.

        CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation);

    And my actual query:

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes.

    The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing.

    But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is and works out through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want. You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012.

    But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error:

        Msg 8622, Level 16, State 1, Line 1
        Query processor could not produce a query plan because of the hints defined in this query.
        Resubmit the query without specifying any hints and without using SET FORCEPLAN.

    And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post.

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT
            l.Name,
            COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),
            COALESCE(a1.City,a2.City,a3.City),
            s.Name AS [State],
            c.Name AS Country
        FROM MyLocations AS l
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a1
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000
            AND a1.AddressID IS NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a2
        OUTER APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000
            AND a2.AddressID IS NULL
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a3
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID)
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;

    But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query above. Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index:

    - A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
    - The TOP clause cannot contain a PERCENT statement.
    - The WHERE clause must contain a STDistance() method.
    - If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
    - The first expression in the ORDER BY clause must use the STDistance() method.
    - Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
    - All the rows for which STDistance returns NULL must be filtered out.

    Let’s start from the top.
    1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index.
    2. No ‘PERCENT’. Yeah, I don’t have that.
    3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine.
    4. Yeah, I don’t have multiple predicates.
    5. The first expression in the ORDER BY is my distance, that’s fine.
    6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky.
    7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either.

    ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley
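
    A sketch of that NULL-filtered version, assuming the same AdventureWorks2012 schema; the one-line WHERE addition is what satisfies requirements #3 and #7 at once:

        WITH MyLocations AS
        (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                               ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                       ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL -- lets the spatial index kick in
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s
            ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c
            ON c.CountryRegionCode = s.CountryRegionCode
        ;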

  • How to get the mic on the Creative X-Mod soundcard working correctly?

    - by Nyamiou The Galeanthrope
    Well, I have had this problem for a while now. When my computer starts, the mic seems to work but it's like it's muted. I have to go to a terminal, type alsamixer -c 1, set PCM Capture Source to Line and then set it back to Mic to get the mic actually working. Is there a way to do this automatically, or to solve the problem? I use a special workaround on this card because of bug #429642. My workaround is having this at the end of my /usr/share/pulseaudio/alsa-mixer/profile-sets/default.conf:

        [Mapping xmod-stereo-out]
        device-strings = surround51:%f
        description = Analog Stereo Creative Xmod
        channel-map = front-left,front-right
        paths-output = analog-output analog-output-headphones analog-output-mono analog-output-lfe-on-mono
        paths-input = analog-input analog-input-mic analog-input-linein analog-input-aux analog-input-video analog-input-tvtuner analog-input-fm analog-input-mic-line
        priority = 10

    Maybe the bug comes from here; maybe I have to change something. Thank you for any help.
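
    A sketch of scripting the same toggle at login, assuming amixer exposes the same 'PCM Capture Source' control on card 1 that alsamixer shows (run it from a Startup Applications entry or a login script):

        #!/bin/sh
        # Replay the manual alsamixer dance: bounce the capture source to Line and back to Mic.
        amixer -c 1 sset 'PCM Capture Source' Line
        amixer -c 1 sset 'PCM Capture Source' Mic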

  • An AE1200 Cisco/Linksys Wireless-N USB adapter stopped working after I ran the update manager in Ubuntu 12.04

    - by user69670
    Here is the problem: I use a Cisco/Linksys AE1200 wireless network adapter to connect my desktop to a public wifi internet connection. I use ndiswrapper with the Windows driver, and it had been working fine for me until I ran the update manager overnight a few days ago. When I woke up it was asking for the normal computer restart to implement the changes, but after rebooting the computer the wireless adapter did not work. The status light on the adapter does not light up, even though Ubuntu recognizes it is there and, according to ndiswrapper, the drivers are loaded and the hardware is present. grep is misbehaving for some unknown reason today, so this will be long; sorry.

    Output from "lspci":

        00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI Radeon Xpress 200 Host Bridge (rev 01)
        00:01.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge
        00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA
        00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI0)
        00:13.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI1)
        00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI2)
        00:13.3 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI3)
        00:13.4 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI4)
        00:13.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB Controller (EHCI)
        00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 13)
        00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB600 IDE
        00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB600 PCI to LPC Bridge
        00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
        01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RC410 [Radeon Xpress 200]
        02:02.0 Communication controller: Conexant Systems, Inc. HSF 56k Data/Fax Modem
        02:03.0 Multimedia audio controller: Creative Labs CA0106 Soundblaster
        02:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    Output from "lsusb":

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 009: ID 13b1:0039 Linksys AE1200 802.11bgn Wireless Adapter [Broadcom BCM43235]
        Bus 003 Device 002: ID 045e:0053 Microsoft Corp. Optical Mouse
        Bus 004 Device 002: ID 1043:8006 iCreate Technologies Corp. Flash Disk 32-256 MB

    Output from "ifconfig":

        eth0      Link encap:Ethernet  HWaddr 00:19:21:b6:af:7c
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:20 Base address:0xb400

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:13232 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:13232 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1084624 (1.0 MB)  TX bytes:1084624 (1.0 MB)

    Output from "iwconfig":

        lo        no wireless extensions.
        eth0      no wireless extensions.

    Output from "lsmod":

        Module                 Size   Used by
        nls_iso8859_1          12617  1
        nls_cp437              12751  1
        vfat                   17308  1
        fat                    55605  1 vfat
        uas                    17828  0
        usb_storage            39646  1
        nls_utf8               12493  1
        udf                    84366  1
        crc_itu_t              12627  1 udf
        snd_ca0106             39279  2
        snd_ac97_codec        106082  1 snd_ca0106
        ac97_bus               12642  1 snd_ac97_codec
        snd_pcm                80845  2 snd_ca0106,snd_ac97_codec
        rfcomm                 38139  0
        snd_seq_midi           13132  0
        snd_rawmidi            25424  2 snd_ca0106,snd_seq_midi
        bnep                   17830  2
        parport_pc             32114  0
        bluetooth             158438  10 rfcomm,bnep
        ppdev                  12849  0
        snd_seq_midi_event     14475  1 snd_seq_midi
        snd_seq                51567  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              28931  2 snd_pcm,snd_seq
        snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    62064  11 snd_ca0106,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              14635  1 snd
        snd_page_alloc         14108  2 snd_ca0106,snd_pcm
        sp5100_tco             13495  0
        i2c_piix4              13093  0
        radeon                733693  3
        ttm                    65344  1 radeon
        drm_kms_helper         45466  1 radeon
        drm                   197692  5 radeon,ttm,drm_kms_helper
        i2c_algo_bit           13199  1 radeon
        mac_hid                13077  0
        shpchp                 32325  0
        ati_agp                13242  0
        lp                     17455  0
        parport                40930  3 parport_pc,ppdev,lp
        usbhid                 41906  0
        hid                    77367  1 usbhid
        8139too                23283  0
        8139cp                 26759  0
        pata_atiixp            12999  1

    Output from "sudo lshw -C network":

        *-network
             description: Ethernet interface
             product: RTL-8139/8139C/8139C+
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 5
             bus info: pci@0000:02:05.0
             logical name: eth0
             version: 10
             serial: 00:19:21:b6:af:7c
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=64 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s
             resources: irq:20 ioport:b400(size=256) memory:ff5fdc00-ff5fdcff

    Output from "iwlist scan":

        lo        Interface doesn't support scanning.
        eth0      Interface doesn't support scanning.

    Output from "lsb_release -d":

        Ubuntu 12.04 LTS

    Output from "uname -mr":

        3.2.0-24-generic-pae i686

    Output from "sudo /etc/init.d/networking restart":

        * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
        * Reconfiguring network interfaces...    [ OK ]

  • Amnesia doesn't start due to audio problems

    - by james
    I have a problem with the Amnesia game. After the intro, and after clicking the continue button a few times, the game crashes just when it is supposed to start. Here is the console output:

        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server socket
        jack server is not running or cannot be started

    I should mention I have integrated graphics and an integrated sound card.
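
    One low-risk thing to try, on the assumption that Amnesia's Linux build goes through OpenAL Soft for audio: steer OpenAL at the PulseAudio backend instead of the failing ALSA plugins, via a user config file (a sketch; requires PulseAudio to be running):

        # ~/.alsoftrc
        [general]
        drivers = pulse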

  • Automatically select headphones when plugged in

    - by Joachim Pileborg
    When I plug headphones into my desktop computer, they are not automatically selected for output; instead all sound still goes through my S/PDIF output to the stereo. The headphones alternative is added in the sound settings, and I have to manually select it as the output device.

        $ cat /proc/asound/pcm
        00-00: ALC898 Analog : ALC898 Analog : playback 1 : capture 1
        00-01: ALC898 Digital : ALC898 Digital : playback 1
        00-02: ALC898 Analog : ALC898 Analog : capture 2

    I have done a Google search, as well as searching askubuntu.com, but none of the answers in the hits I found seems to help. Also, after listening with the headphones and then unplugging them, the previous output is not automatically selected, so I have no sound at all. I have to manually select the correct output in the settings.
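
    Where jack detection works, PulseAudio can do this switch itself; until then the change can at least be scripted. A sketch using pacmd (the sink and port names below are examples; the real ones come from 'pacmd list-sinks' and will differ per machine):

        pacmd list-sinks | grep -A2 ports    # discover sink and port names
        pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-headphones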

  • Iptables blocking mysql port 3306

    - by valmar
    I have a Tomcat server running a web application that must access a MySQL server via Hibernate on the same machine. So I added a rule for port 3306 to my iptables script, but Tomcat cannot connect to the MySQL server for some reason. I have to reset all iptables rules before Tomcat can connect to the MySQL server again. All the other iptables rules work perfectly though. What's wrong? Here is my script:

        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp -s localhost --dport 8009 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp -d localhost --dport 8009 -j ACCEPT
        iptables -A INPUT -p tcp -s localhost --dport 3306 -j ACCEPT
        iptables -A OUTPUT -p tcp -d localhost --dport 3306 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
        iptables -A INPUT -p tcp --dport 25 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT
        iptables -A INPUT -p tcp --dport 587 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 587 -j ACCEPT
        iptables -A INPUT -p tcp --dport 465 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 465 -j ACCEPT
        iptables -A INPUT -p tcp --dport 110 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 110 -j ACCEPT
        iptables -A INPUT -p tcp --dport 995 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 995 -j ACCEPT
        iptables -A INPUT -p tcp --dport 143 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 143 -j ACCEPT
        iptables -A INPUT -p tcp --dport 993 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 993 -j ACCEPT
        iptables -A INPUT -j DROP

    My /etc/hosts file:

        # nameserver config
        # IPv4
        127.0.0.1 localhost
        46.4.7.93 mydomain.com
        46.4.7.93 Ubuntu-1004-lucid-64-minimal
        46.4.7.93 horst
        # IPv6
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    Having a look into the iptables logs gives me this:

        Jun 22 16:52:43 Ubuntu-1004-lucid-64-minimal kernel: [  435.111780] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52432 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
        Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [  438.110555] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=52433 DF PROTO=TCP SPT=56108 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
        Jun 22 16:52:46 Ubuntu-1004-lucid-64-minimal kernel: [  438.231954] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48020 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
        Jun 22 16:52:49 Ubuntu-1004-lucid-64-minimal kernel: [  441.229778] denied-input IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=48021 DF PROTO=TCP SPT=56109 DPT=8009 WINDOW=32792 RES=0x00 SYN URGP=0
        Jun 22 16:53:57 Ubuntu-1004-lucid-64-minimal kernel: [  508.731839] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23053 DF PROTO=TCP SPT=1672 DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0
        Jun 22 16:53:59 Ubuntu-1004-lucid-64-minimal kernel: [  511.625038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=78.92.97.67 DST=46.4.7.93 LEN=64 TOS=0x00 PREC=0x00 TTL=122 ID=23547 DF PROTO=TCP SPT=1672 DPT=445 WINDOW=65535 RES=0x00 SYN URGP=0
        Jun 22 16:54:22 Ubuntu-1004-lucid-64-minimal kernel: [  533.981995] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=27.254.39.16 DST=46.4.7.93 LEN=48 TOS=0x00 PREC=0x00 TTL=117 ID=6549 PROTO=TCP SPT=6005 DPT=33796 WINDOW=64240 RES=0x00 ACK SYN URGP=0
        Jun 22 16:54:44 Ubuntu-1004-lucid-64-minimal kernel: [  556.297038] denied-input IN=eth0 OUT= MAC=6c:62:6d:85:bf:0e:00:26:88:75:dc:01:08:00 SRC=94.78.93.41 DST=46.4.7.93 LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=7712 PROTO=TCP SPT=57598 DPT=445 WINDOW=512 RES=0x00 SYN URGP=0
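
    The log shows new connections over the loopback interface being dropped: their SYN packets never match the ESTABLISHED-only rules, and if Tomcat connects to the machine's hostname rather than 'localhost', the source address on lo is 46.4.7.93, which '-s localhost' never matches. The conventional fix (a sketch) is to accept all loopback traffic before anything else:

        iptables -I INPUT 1 -i lo -j ACCEPT
        iptables -I OUTPUT 1 -o lo -j ACCEPT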

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was that Eclipse ships with its own Java compiler (ecj) for doing incremental builds. By default, Eclipse outputs incrementally built class files to the projRoot/bin folder. I've noticed too that many projects come with ant files that build the project using the compiler installed on the system for production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the makefile. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its intellisense/incremental building/etc.)? Is it typical that for the final "release" build of a project, ant, maven, or another tool is used to do the full build from the command line? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?
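
    For contrast, the conventional command-line release build the question alludes to is typically just an ant target driving the JDK's javac (a minimal sketch; names and paths are placeholders):

        <project name="myapp" default="compile">
            <target name="compile">
                <mkdir dir="build/classes"/>
                <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
            </target>
        </project>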

  • Shortest Common Superstring: find shortest string that contains all given string fragments

    - by occulus
    Given some string fragments, I would like to find the shortest possible single string ("output string") that contains all the fragments. Fragments can overlap each other in the output string. Example: for the string fragments

        BCDA
        AGF
        ABC

    the following output string contains all fragments, and was made by naive appending:

        BCDAAGFABC

    However this output string is better (shorter), as it employs overlaps:

        ABCDAGF
        ^ ABC
        ^ BCDA
        ^ AGF

    I'm looking for algorithms for this problem. It's not absolutely important to find the strictly shortest output string, but the shorter the better. I'm looking for an algorithm better than the obvious naive one that would try appending all permutations of the input fragments and removing overlaps (which would appear to be NP-complete). I've started work on a solution and it's proving quite interesting; I'd like to see what other people might come up with. I'll add my work-in-progress to this question in a while.
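
    A common starting point is the greedy merge heuristic, sketched below in Python. It is an approximation (shortest common superstring is NP-hard, so no fast exact algorithm is known), but it avoids trying all permutations: repeatedly merge the pair of fragments with the largest suffix/prefix overlap until one string remains.

        def overlap(a, b):
            # length of the longest suffix of a that is a prefix of b
            for n in range(min(len(a), len(b)), 0, -1):
                if a.endswith(b[:n]):
                    return n
            return 0

        def shortest_superstring(fragments):
            # dedupe, then drop any fragment already contained in another
            frags = list(dict.fromkeys(fragments))
            frags = [f for f in frags if not any(f != g and f in g for g in frags)]
            while len(frags) > 1:
                best_n, best_i, best_j = -1, 0, 1
                for i, a in enumerate(frags):
                    for j, b in enumerate(frags):
                        if i != j:
                            n = overlap(a, b)
                            if n > best_n:
                                best_n, best_i, best_j = n, i, j
                merged = frags[best_i] + frags[best_j][best_n:]
                frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
                frags.append(merged)
            return frags[0] if frags else ""

        print(shortest_superstring(["BCDA", "AGF", "ABC"]))  # -> ABCDAGF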

  • How to get espeak working?

    - by wisemonkey
    I'm trying to use the Read Aloud feature of Acrobat, so I need a speech synthesizer. I've installed espeak and the libgnome-speech libraries (it didn't work for Acrobat right out of the box). When I started espeak-gui from the command line it gave me a segmentation fault; next I tried plain espeak, and here is the output:

        ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
        ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
        ALSA lib pcm.c:2212:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
        ALSA lib audio/pcm_bluetooth.c:1613:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1613:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1613:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1613:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server socket
        jack server is not running or cannot be started

    Any ideas? Or any alternative solutions for Read Aloud? Thanks
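
    To check whether the synthesizer itself works despite the ALSA noise, a quick sketch is to route espeak's WAV output through PulseAudio instead of letting it open ALSA directly (both espeak --stdout and paplay are standard):

        espeak --stdout "hello world" | paplay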

  • How to get a result from an output parameter (SYS_REFCURSOR) of an Oracle stored procedure in iBATIS 3

    - by yjacket
    I found an example of how to call an Oracle stored procedure in iBATIS 3 without a map file, and now I understand how to call it. But I have another problem: how do I get a result from the output parameter (an Oracle cursor)? Part of the exception message is "There is no setter for property named 'rs' in 'class java.lang.Class'". Below is my code. Can anyone help me?

    Oracle stored procedure:

        CREATE OR REPLACE PROCEDURE getProducts
        (
            rs OUT SYS_REFCURSOR
        )
        IS
        BEGIN
            OPEN rs FOR SELECT * FROM Products;
        END getProducts;

    Interface:

        public interface ProductMapper {
            @Select("call getProducts(#{rs,mode=OUT,jdbcType=CURSOR})")
            @Options(statementType = StatementType.CALLABLE)
            List<Product> getProducts();
        }

    DAO:

        public class ProductDAO {
            public List<Product> getProducts() {
                return mapper.getProducts(); // mapper is ProductMapper
            }
        }

    Full error message:

        Exception in thread "main" org.apache.ibatis.exceptions.IbatisException:
        ### Error querying database. Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
        ### The error may involve defaultParameterMap
        ### The error occurred while setting parameters
        ### Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
            at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:8)
            at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:61)
            at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:53)
            at org.apache.ibatis.binding.MapperMethod.executeForList(MapperMethod.java:82)
            at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:63)
            at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:35)
            at $Proxy8.getList(Unknown Source)
            at com.dao.ProductDAO.getList(ProductDAO.java:42)
            at com.Ibatis3Test.main(Ibatis3Test.java:30)
        Caused by: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
            at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:154)
            at org.apache.ibatis.reflection.wrapper.BeanWrapper.set(BeanWrapper.java:36)
            at org.apache.ibatis.reflection.MetaObject.setValue(MetaObject.java:120)
            at org.apache.ibatis.executor.resultset.FastResultSetHandler.handleOutputParameters(FastResultSetHandler.java:69)
            at org.apache.ibatis.executor.statement.CallableStatementHandler.query(CallableStatementHandler.java:44)
            at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:55)
            at org.apache.ibatis.executor.SimpleExecutor.doQuery(SimpleExecutor.java:41)
            at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:94)
            at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:72)
            at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:59)
            ... 7 more
        Caused by: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
            at org.apache.ibatis.reflection.Reflector.getSetInvoker(Reflector.java:300)
            at org.apache.ibatis.reflection.MetaClass.getSetInvoker(MetaClass.java:97)
            at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:146)
            ... 16 more
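
    The usual pattern for cursor OUT parameters in MyBatis/iBATIS 3 (a sketch; 'productMap' is a hypothetical result map that would still need to be defined in mapper XML) is to pass a Map and let the framework write the mapped cursor rows into it, rather than using the method's return value:

        public interface ProductMapper {
            @Select("{call getProducts(#{rs,mode=OUT,jdbcType=CURSOR,javaType=java.sql.ResultSet,resultMap=productMap})}")
            @Options(statementType = StatementType.CALLABLE)
            void getProducts(Map<String, Object> params);
        }

        // usage
        Map<String, Object> params = new HashMap<String, Object>();
        mapper.getProducts(params);
        List<Product> products = (List<Product>) params.get("rs");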

  • Output from OouraFFT correct sometimes but completely false other times. Why?

    - by Yan
    Hi, I am using the Ooura FFT to compute the FFT of accelerometer data in windows of 1024 samples. The code works fine, but then for some reason it produces very strange outputs, i.e. a continuous spectrum with amplitudes on the order of 10^200. Here is the code:

        OouraFFT *myFFT = [[OouraFFT alloc] initForSignalsOfLength:1024 NumWindows:10]; // had to allocate it
        UIAcceleration *tempAccel = nil;
        double *input = (double *)malloc(1024 * sizeof(double));
        double *frequency = (double *)malloc(1024 * sizeof(double));
        if (input) {
            //NSLog(@"%d",[array count]);
            for (int u = 0; u < [array count]; u++) {
                tempAccel = (UIAcceleration *)[array objectAtIndex:u];
                input[u] = tempAccel.z;
                //NSLog(@"%g",input[u]);
            }
        }
        myFFT.inputData = input; // specifies input data to myFFT
        [myFFT calculateWelchPeriodogramWithNewSignalSegment]; // calculates FFT
        // loop to copy output of myFFT; length of spectrumData is half of input data, so copy twice
        for (int i = 0; i < myFFT.dataLength; i++) {
            if (i < myFFT.numFrequencies) {
                frequency[i] = myFFT.spectrumData[i];
            } else {
                frequency[i] = myFFT.spectrumData[myFFT.dataLength - i]; // copy twice
            }
        }
        for (int i = 0; i < [array count]; i++) {
            TransformedAcceleration *NewAcceleration = [[TransformedAcceleration alloc] init];
            tempAccel = (UIAcceleration *)[array objectAtIndex:i];
            NewAcceleration.timestamp = tempAccel.timestamp;
            NewAcceleration.x = tempAccel.x;
            NewAcceleration.y = tempAccel.z;
            NewAcceleration.z = frequency[i];
            [newcurrentarray addObject:NewAcceleration]; // this does not work
            //[self replaceAcceleration:NewAcceleration];
            //[NewAcceleration release];
            [NewAcceleration release];
        }
        TransformedAcceleration *a = nil; //[[TransformedAcceleration alloc] init]; // object containing fft of x,y,z accelerations
        for (int i = 0; i < [newcurrentarray count]; i++) {
            a = (TransformedAcceleration *)[newcurrentarray objectAtIndex:i];
            //NSLog(@"%d,%@",i,[a printAcceleration]);
            fprintf(fp, [[a printAcceleration] UTF8String]); // this is going wrong somehow
        }
        fclose(fp);
        [array release];
        [myFFT release];
        //[array removeAllObjects];
        [newcurrentarray release];
        free(input);
        free(frequency);
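
    One thing worth checking (a sketch, keeping the post's variable names): input is malloc'd for 1024 samples but only [array count] of them are written, so whenever a window has fewer than 1024 samples the FFT reads uninitialized memory, which is exactly the kind of thing that produces 10^200-scale garbage. Zero-filling and clamping the copy guards against that:

        double *input = (double *)calloc(1024, sizeof(double)); // zeroed, unlike malloc
        NSUInteger n = MIN((NSUInteger)1024, [array count]);
        for (NSUInteger u = 0; u < n; u++) {
            tempAccel = (UIAcceleration *)[array objectAtIndex:u];
            input[u] = tempAccel.z;
        }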

  • How do I handle a missing mandatory argument in Ruby OptionParser?

    - by Rob Jones
    In OptionParser I can make an option mandatory, but if I leave out that value it will take the name of any following option as the value, screwing up the rest of the command-line parsing. Here is a test case that echoes the values of the options:

        $ ./test_case.rb --input foo --output bar
        output bar
        input foo

    Now leave out the value for the first option:

        $ ./test_case.rb --input --output bar
        input --output

    Is there some way to prevent it taking another option name as a value? Thanks! Here is the test case code:

        #!/usr/bin/env ruby

        require 'optparse'

        files = Hash.new

        option_parser = OptionParser.new do |opts|
          opts.on('-i', '--input FILENAME', 'Input filename - required') do |filename|
            files[:input] = filename
          end
          opts.on('-o', '--output FILENAME', 'Output filename - required') do |filename|
            files[:output] = filename
          end
        end

        begin
          option_parser.parse!(ARGV)
        rescue OptionParser::ParseError
          $stderr.print "Error: " + $! + "\n"
          exit
        end

        files.keys.each do |key|
          print "#{key} #{files[key]}\n"
        end
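
    A sketch of one guard, using only stock OptionParser behaviour: reject any value that looks like a flag inside the handler, so "--output" cannot be swallowed as --input's argument.

        opts.on('-i', '--input FILENAME', 'Input filename - required') do |filename|
          # treat a leading dash as a missing value rather than a filename
          raise OptionParser::InvalidArgument, filename if filename.start_with?('-')
          files[:input] = filename
        end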

  • C#: How to output to a GUI when data arrives via a MarshalByRefObject interface?

    - by Tom
    Hey, can someone please show me how I can write the output of OnCreateFile to a GUI? I thought the GUI would have to be declared at the bottom in the main function, so how do I then refer to it within OnCreateFile?

        using System;
        using System.Collections.Generic;
        using System.Runtime.Remoting;
        using System.Text;
        using System.Diagnostics;
        using System.IO;
        using EasyHook;
        using System.Drawing;
        using System.Windows.Forms;

        namespace FileMon
        {
            public class FileMonInterface : MarshalByRefObject
            {
                public void IsInstalled(Int32 InClientPID)
                {
                    //Console.WriteLine("FileMon has been installed in target {0}.\r\n", InClientPID);
                }

                public void OnCreateFile(Int32 InClientPID, String[] InFileNames)
                {
                    for (int i = 0; i < InFileNames.Length; i++)
                    {
                        String[] s = InFileNames[i].ToString().Split('\t');
                        if (s[0].ToString().Contains("ROpen"))
                        {
                            //Console.WriteLine(DateTime.Now.Hour+":"+DateTime.Now.Minute+":"+DateTime.Now.Second+"."+DateTime.Now.Millisecond + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                            Program.ff.enterText(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                        }
                        else if (s[0].ToString().Contains("RQuery"))
                        {
                            Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + getRootHive(s[2]));
                        }
                        else if (s[0].ToString().Contains("RDelete"))
                        {
                            Console.WriteLine(DateTime.Now.Hour + ":" + DateTime.Now.Minute + ":" + DateTime.Now.Second + "." + DateTime.Now.Millisecond + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[0])) + "\t" + getRootHive(s[1]));
                        }
                        else if (s[0].ToString().Contains("FCreate"))
                        {
                            //Console.WriteLine(DateTime.Now.Hour+":"+DateTime.Now.Minute+":"+DateTime.Now.Second+"."+DateTime.Now.Millisecond + "\t" + s[0] + "\t" + getProcessName(int.Parse(s[1])) + "\t" + s[2]);
                        }
                    }
                }

                public void ReportException(Exception InInfo)
                {
                    Console.WriteLine("The target process has reported an error:\r\n" + InInfo.ToString());
                }

                public void Ping() { }

                public String getProcessName(int ID)
                {
                    String name = "";
                    Process[] process = Process.GetProcesses();
                    for (int i = 0; i < process.Length; i++)
                    {
                        if (process[i].Id == ID)
                        {
                            name = process[i].ProcessName;
                        }
                    }
                    return name;
                }

                public String getRootHive(String hKey)
                {
                    int r = hKey.CompareTo("2147483648");
                    int r1 = hKey.CompareTo("2147483649");
                    int r2 = hKey.CompareTo("2147483650");
                    int r3 = hKey.CompareTo("2147483651");
                    int r4 = hKey.CompareTo("2147483653");
                    if (r == 0) { return "HKEY_CLASSES_ROOT"; }
                    else if (r1 == 0) { return "HKEY_CURRENT_USER"; }
                    else if (r2 == 0) { return "HKEY_LOCAL_MACHINE"; }
                    else if (r3 == 0) { return "HKEY_USERS"; }
                    else if (r4 == 0) { return "HKEY_CURRENT_CONFIG"; }
                    else return hKey.ToString();
                }
            }

            class Program : System.Windows.Forms.Form
            {
                static String ChannelName = null;
                public static Form1 ff;

                Program() // ADD THIS CONSTRUCTOR
                {
                    InitializeComponent();
                }

                static void Main()
                {
                    try
                    {
                        Config.Register("A FileMon like demo application.", "FileMon.exe", "FileMonInject.dll");
                        RemoteHooking.IpcCreateServer<FileMonInterface>(ref ChannelName, WellKnownObjectMode.SingleCall);
                        Process[] p = Process.GetProcesses();
                        for (int i = 0; i < p.Length; i++)
                        {
                            try
                            {
                                RemoteHooking.Inject(p[i].Id, "FileMonInject.dll", "FileMonInject.dll", ChannelName);
                            }
                            catch (Exception e) { }
                        }
                    }
                    catch (Exception ExtInfo)
                    {
                        Console.WriteLine("There was an error while connecting to target:\r\n{0}", ExtInfo.ToString());
                    }
                }
            }
        }
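
    The core issue is that OnCreateFile runs on a remoting callback thread, while WinForms controls may only be touched from the UI thread, so the text has to be marshalled across with Control.Invoke. A sketch of how Form1.enterText could be written ('logBox' is a hypothetical TextBox on the form):

        public partial class Form1 : Form
        {
            public void enterText(string line)
            {
                if (InvokeRequired)
                {
                    // hop from the remoting thread onto the UI thread
                    Invoke(new Action<string>(enterText), line);
                    return;
                }
                logBox.AppendText(line + Environment.NewLine);
            }
        }

    Program.ff also has to be assigned a running Form1 instance (e.g. the one passed to Application.Run) before OnCreateFile fires, or the call will throw a NullReferenceException.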

  • Convert DVD to MP4 / H.264 with HD Decrypter and Handbrake

    - by DigitalGeekery
    Are you looking for a way to convert your DVD collection to high quality MP4 files? Today we are going to take a look at using DVDFab HD Decrypter along with Handbrake to convert DVDs to MP4 using the H.264 codec.

    Process Overview

    Handbrake is a great file conversion application, but it unfortunately can't handle DVD copy protection. For that we will use DVDFab's HD Decrypter. HD Decrypter is the always-free portion of the DVDFab application. What HD Decrypter will do is remove the copy protection from your DVD and copy the VIDEO_TS and AUDIO_TS folders to your hard drive. Once the copy protection is gone, we will use Handbrake to convert the files to MP4 format with H.264 compression. Note: You'll get full access to all the options in DVDFab during the 30-day trial period. However, the HD Decrypter is free and will continue to work.

    Ripping the DVD

    Install both Handbrake and DVDFab HD Decrypter (download links below). Once the applications are installed, place your DVD into your DVD drive and open DVDFab. On the welcome screen, click "Start DVDFab." You'll be prompted to choose your region; click "OK." The disc is analyzed and opened, and you'll be brought to the main interface. Make sure you have the Full Disc option selected at the left panel and "Copy DVD-Video (VIDEO_TS folder)" is selected. Click "Start." Don't be confused by the "DVD to DVD" option pop-up. We won't actually be burning to DVD; the HD Decrypter portion of the DVDFab suite is part of the DVD to DVD option. Click "OK." The DVD will be ripped to your hard drive. When the copy process is complete, you'll be prompted to insert media to start the write process. We aren't going to be burning to disc, so just click Cancel, then close out of DVDFab.

    Converting to MP4

    Now we are ready to convert. Open Handbrake and click on the "Source" button at the top left. Select DVD / VIDEO_TS folder from the drop down list. Now we need to browse for the location where DVDFab HD Decrypter copied your movie. By default, that location will be the \DVDFab\Temp\FullDisc directory in your Documents folder. For example, in Windows 7, it would be: C:\Users\%username%\Documents\DVDFab\Temp\FullDisc\[Name of Your DVD]. Select the folder, and click "OK." You may be prompted to set a default path in Handbrake; this is an optional step. Click "OK." If you'd like to set a default destination folder, go to Tools on the top menu and select Options. On the General tab, click "Browse" to select a destination output folder. Click "Close" when finished.

    Next, click the dropdown list next to "Title." Select the title that matches the length of the movie. It's possible you may see more than one title with a similar length. If so, consult the DVD information, or a site like IMDB.com, to find the proper movie title length. Select your container under Output Settings. This will be your final output file extension; we will be using MP4 for this example. You also have the option of MKV. If you didn't set up a default destination folder, you'll need to select one by clicking the "Browse" button. You can manually customize the output file name and change the output file extension to .mp4 (unless you prefer the iPod-friendly .m4v extension).

    Settings

    There are a variety of custom settings that can be changed either through the tabs listed under Output Settings, or by selecting one of the Presets to the right.
    If converting exclusively for any of the devices listed in the preset list, simply click on that device and the settings will be automatically applied in the Output Settings tabs. For more universal (non-Apple) devices or output, select the Normal profile. For the most part, the presets will suit quite nicely. However, you can further customize settings if you'd like. The Picture tab allows you to tweak the size or cropping region; you must change Anamorphic to Loose or Custom to change the size. The Video tab allows you to choose your codec (H.264 is the default). You also have the option to choose a target (output) size. The Constant Quality is recommended to be set between 59% and 63%. Anything over 70% will likely result in an output file larger than the input without any improved quality. On the Subtitles tab, you can select an available subtitle from the dropdown list and click "Add" to add it to the output file. When you've finished any customizations you are ready to begin the conversion process. Click "Start." A command window will open and you can follow the process. You'll probably want to find something to do in the meantime, as the process could take a couple of hours. When the process completes, you're ready to watch your video. Although it's a time-consuming process that involves a couple of steps, this method will give you high quality H.264 video files. If you want to rip and burn your DVDs to ISO, check out our article on how to rip and convert DVDs to an ISO image.

    Links:
    Download DVDFab HD Decrypter (part of the DVDFab suite)
    Download Handbrake
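
    For scripted or repeat conversions, the same encode can be done from the command line. A sketch using HandBrakeCLI (a separate download; the flags shown are standard, the paths are placeholders):

        HandBrakeCLI -i "C:\Users\me\Documents\DVDFab\Temp\FullDisc\MY_DVD" -o "C:\Videos\movie.mp4" -e x264 -q 20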

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights.

        // Shadow term (1 = no shadow)
        float shadow = 1;

        // [Light Space -> Shadow Map Space]
        // Transform the surface into light space and project
        // NB: Could be done in the vertex shader, but doing it here keeps the
        // "light shader" abstraction and doesn't limit the number of shadowed lights
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4 surf_tex = mul(position, LightViewProjection);

        // Re-homogenize
        // 'w' component is not used in later calculations so no need to homogenize (it will equal '1' if homogenized)
        surf_tex.xyz /= surf_tex.w;

        // Rescale viewport to be [0,1] (texture coordinate system)
        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

        // Half texel offset
        //shadow_tex += (0.5 / 512);

        // Scaled distance to light (instead of 'surf_tex.z')
        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
        //float rescaled_dist_to_light = surf_tex.z;

        // [Variance Shadow Map Depth Calculation]
        // No filtering
        float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

        // Flip the moments values to bring them back to their original values
        moments.x = 1.0 - moments.x;
        moments.y = 1.0 - moments.y;

        // Compute variance
        float E_x2 = moments.y;
        float Ex_2 = moments.x * moments.x;
        float variance = E_x2 - Ex_2;
        variance = max(variance, Bias.y);

        // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
        // One-tailed inequality
        float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

        // Compute probabilistic upper bound (mean distance)
        float m_d = moments.x - rescaled_dist_to_light;

        // Chebychev's inequality
        float p = variance / (variance + m_d * m_d);
        p = ReduceLightBleeding(p, Bias.z);

        // Adjust the light color based on the shadow attenuation
        shadow *= max(lit_factor, p);

    This is what I know for certain so far:
    - The lighting is correct if I do not try and calculate the shadow term. (No shadows.)
    - The shadow term is correct when calculated using forward rendered lighting. (VSM works with forward rendered lights.)

    With the current rescaled light distance (LightAttenuation.y is the far plane value):

        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

    the light is correct, but the shadow appears to be zoomed in and misses the blurring. When I do not rescale the light and use the homogenized 'surf_tex':

        float rescaled_dist_to_light = surf_tex.z;

    the shadows are blurred correctly, but the lighting is incorrect and the cube model is no longer lit. Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows:

        // [Position]
        float4 position;

        // [Screen Position]
        position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        position.z = 1.0 - position.z;
        position.w = 1.0; // 1.0 = position.w / position.w

        // [World Position]
        position = mul(position, CameraViewProjectionInverse);

        // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
        position.xyz /= position.w;
        position.w = 1.0;

    Using the inverse matrix of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for shadow, including 'dist_to_light':

        // Work out the light position and direction in world space
        float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);

        // Direction might need to be negated
        float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);

        // Unnormalized light vector
        float3 dir_to_light = light_position - position;

        // Direction from vertex
        float dist_to_light = length(dir_to_light);

        // Normalise 'toLight' vector for lighting calculations
        dir_to_light = normalize(dir_to_light);

    EDIT 2: These are the calculations for the moments (depth):

        //=============================================
        //---[Vertex Shaders]--------------------------
        //=============================================
        DepthVSOutput depth_VS(
            float4 Position : POSITION,
            uniform float4x4 shadow_view,
            uniform float4x4 shadow_view_projection)
        {
            DepthVSOutput output = (DepthVSOutput)0;

            // First transform position into world space
            float4 position_world = mul(Position, World);

            output.position_screen = mul(position_world, shadow_view_projection);
            output.light_vec = mul(position_world, shadow_view).xyz;

            return output;
        }

        //=============================================
        //---[Pixel Shaders]---------------------------
        //=============================================
        DepthPSOutput depth_PS(DepthVSOutput input)
        {
            DepthPSOutput output = (DepthPSOutput)0;

            // Work out the depth of this fragment from the light, normalized to [0, 1]
            float2 depth;
            depth.x = length(input.light_vec) / FarPlane;
            depth.y = depth.x * depth.x;

            // Flip depth values to avoid floating point inaccuracies
            depth.x = 1.0f - depth.x;
            depth.y = 1.0f - depth.y;

            output.depth = depth.xyxy;

            return output;
        }

    EDIT 3: I have tried the following:

        float4 pp;
        pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        pp.z = 1.0 - pp.z;
        pp.w = 1.0; // 1.0 = position.w / position.w

        // Determine the depth of the pixel with respect to the light
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);

        float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
        float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

        // Transform from light space to shadow map texture space.
        float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
        vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

        // Offset the coordinate by half a texel so we sample it correctly
        vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize

    This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix:

        output.position_screen = mul(position_world, shadow_view_projection);
        //output.light_vec = mul(position_world, shadow_view);
        output.light_vec = output.position_screen;

        depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to horrible floating point precision errors. Everything is lit correctly though.

    EDIT 4: Found an OpenGL based tutorial here. I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scaled matrix to get the uv coordinates for the shadow map sampler:

        /// <summary>
        /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
        /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
        /// </summary>
        const float4x4 ScaleMatrix = float4x4
        (
            0.5,  0.0, 0.0, 0.0,
            0.0, -0.5, 0.0, 0.0,
            0.0,  0.0, 0.5, 0.0,
            0.5,  0.5, 0.5, 1.0
        );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility: % cc -c ~/hello.c % ar r foo.a hello.o % file foo.a foo.a: current ar archive, 32-bit symbol table % ar r -S foo.a hello.o % file foo.a foo.a: current ar archive, 64-bit symbol table In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile Why file Is No Good For Archives And Yet Should Not Be Fixed The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question, requires a multi-step process: Extract all archive members Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description. Remove the extracted files It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so: The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit. It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it. 
  - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker-related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.

    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user-level command.

    The elffile Command

    The syntax for this new command is:

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File          Description
        config        A runtime linker configuration file produced with crle
        dwarf.o       An ELF object
        /etc/passwd   A text file
        mixed.a       Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a   Archive containing ELF objects for different machines
        not_elf.a     Archive containing no ELF objects
        same_elf.a    Archive containing a collection of ELF objects for the
                      same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

      - Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
      - When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from file, for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]
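    An aside on the "small user-level wrapper" point above: libelf's elf_kind() is what makes this classification cheap. The following is a minimal sketch of that style of classifier, an illustration only and not the actual elffile source (compile with -lelf; error handling trimmed):

        /* Classify arguments as ELF object, archive, or non-ELF via libelf. */
        #include <fcntl.h>
        #include <libelf.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (elf_version(EV_CURRENT) == EV_NONE)
                return 1;                        /* libelf out of date */

            for (int i = 1; i < argc; i++) {
                int fd = open(argv[i], O_RDONLY);
                if (fd < 0) {
                    perror(argv[i]);
                    continue;
                }
                Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
                switch (elf ? elf_kind(elf) : ELF_K_NONE) {
                case ELF_K_ELF:                  /* a plain ELF object */
                    printf("%s: ELF\n", argv[i]);
                    break;
                case ELF_K_AR:                   /* an ar(1) archive */
                    printf("%s: archive\n", argv[i]);
                    break;
                default:
                    printf("%s: non-ELF\n", argv[i]);
                }
                elf_end(elf);
                close(fd);
            }
            return 0;
        }

    The real utility does far more (per-member iteration with elf_next(), symbol table format detection, and so on), but elf_kind() is the pivot on which this kind of classification turns.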

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I'm a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012.

    This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I'll make sure it's sufficiently well described. But for this post – I'm going to skip that and assume you get it.

    The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I'm pulling these out is that they always produce the same result within their partitions (if you're applying them to the whole partition). Consider the following query:

    SELECT
        YEAR(OrderDate),
        FIRST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING),
        LAST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING),
        PERCENTILE_CONT(0.95)
            WITHIN GROUP (ORDER BY TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)),
        PERCENTILE_DISC(0.95)
            WITHIN GROUP (ORDER BY TotalDue)
            OVER (PARTITION BY YEAR(OrderDate))
    FROM Sales.SalesOrderHeader
    ;

    This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95% percentile, using both the continuous and discrete methods ('discrete' means it picks the closest one from the values available – 'continuous' means it will happily use something in between, similar to what you would do for a traditional median of four values). I'm sure you can imagine the results – a different value for each field, but within each year, all the rows the same.

    Notice that I'm not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we're considering all the rows available. If we don't specify that, it assumes we only mean "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW", which means that LAST_VALUE ends up being the row we're looking at.

    At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST.
We really should be able to do something like:

    SELECT
        YEAR(OrderDate),
        FIRST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING)
    FROM Sales.SalesOrderHeader
    GROUP BY YEAR(OrderDate)
    ;

    But you can't. You get that age-old error:

    Msg 8120, Level 16, State 1, Line 5
    Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
    Msg 8120, Level 16, State 1, Line 5
    Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

    Hmm. You see, FIRST_VALUE isn't an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can't even surround it in a MAX. Then you get a different error, telling you that you can't use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT:

    SELECT DISTINCT
        YEAR(OrderDate),
        FIRST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING),
        LAST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING),
        PERCENTILE_CONT(0.95)
            WITHIN GROUP (ORDER BY TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)),
        PERCENTILE_DISC(0.95)
            WITHIN GROUP (ORDER BY TotalDue)
            OVER (PARTITION BY YEAR(OrderDate))
    FROM Sales.SalesOrderHeader
    ;

    I'm sorry. It's just the way it goes. Hopefully it'll change in the future, but for now, it's what you'll have to do. If we look in the execution plan, we see that it's incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it's definitely convenient to use these functions, and in time, I'm sure we'll see good improvements in the way that they are implemented.

    Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy's T-SQL Tuesday this month.
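    As a sketch of the TOP-based rewrite Rob alludes to (an assumed shape, not his tested code), the first and last values can be probed once per year instead of windowed across all 31465 rows:

    -- Illustrative only: list the distinct years, then fetch the first
    -- and last order of each with TOP (1) via CROSS APPLY.
    SELECT y.OrderYear, f.TotalDue AS FirstValue, l.TotalDue AS LastValue
    FROM (SELECT DISTINCT YEAR(OrderDate) AS OrderYear
          FROM Sales.SalesOrderHeader) AS y
    CROSS APPLY (SELECT TOP (1) TotalDue
                 FROM Sales.SalesOrderHeader
                 WHERE YEAR(OrderDate) = y.OrderYear
                 ORDER BY OrderDate, SalesOrderID) AS f
    CROSS APPLY (SELECT TOP (1) TotalDue
                 FROM Sales.SalesOrderHeader
                 WHERE YEAR(OrderDate) = y.OrderYear
                 ORDER BY OrderDate DESC, SalesOrderID DESC) AS l
    ;

    Whether this actually beats the DISTINCT plan depends on the indexes available; the point is only that the analytic form trades some plan quality for convenience.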

    Read the article

  • My current iptables configuration doesn't work [on hold]

    - by Brad
sudo chkconfig iptables off
    /etc/init.d/iptables on

    ### Clear/flush iptables
    sudo iptables -F
    sudo iptables -P INPUT ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT

    ### Allow SSH
    iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    ### Allow YUM updates
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 --match owner --uid-owner 0 --state NEW,ESTABLISHED -j ACCEPT

    ### Add your rules from the link above here
    # ftp,smtp,imap,http,https,pop3,imaps,pop3s
    sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 21,25,143,80,443,110,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp -m multiport --sports 21,25,143,80,110,443,993,995 -m state --state NEW,ESTABLISHED -j ACCEPT

    ## allow dns
    sudo iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT && sudo iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT

    # handling pings
    sudo iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
    sudo iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT && sudo iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT

    # manage ddos attacks
    sudo iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

    ## Implement some logging so that we know what's getting dropped
    sudo iptables -N LOGGING
    sudo iptables -A INPUT -j LOGGING
    sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
    sudo iptables -A LOGGING -j DROP

    # once a rule affects traffic then it is no longer managed,
    # so if the traffic has not been accepted, block it
    sudo iptables -A INPUT -j DROP
    sudo iptables -I INPUT 1 -i lo -j ACCEPT
    sudo iptables -A OUTPUT -j DROP

    # allow only internal port forwarding
    sudo iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
    sudo iptables -P FORWARD DROP

    # create an iptables config file
    sudo iptables-save > /root/dsl.fw

    ### Append the following to the rc.local file
    sudo nano /etc/rc.local
    ####--- /sbin/iptables-restore < sudo /root/dsl.fw
    ####--- /etc/init.d/iptables save

    ## check to see if this setting is working great.
    sudo service iptables restart

    ## log out/in testing
    sudo chkconfig iptables on

    What is the problem with this setup? If I restart the server, it doesn't let me back in via SSH, and there may also be a problem with Yum.

    Original source of information: https://gist.github.com/Jonathonbyrd/1274837#file-instructions
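    Two details in the script above stand out as likely causes, sketched here as a hedged guess rather than a verified diagnosis:

    # 1) '--state' is an option of the state match, so it must follow
    #    '-m state'; as written, the two YUM rules fail to load at all:
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 80 \
        -m owner --uid-owner 0 -m state --state NEW,ESTABLISHED -j ACCEPT
    sudo iptables -A OUTPUT -o eth0 -p tcp --dport 443 \
        -m owner --uid-owner 0 -m state --state NEW,ESTABLISHED -j ACCEPT

    # 2) 'sudo' cannot appear inside a shell redirection; in rc.local,
    #    restore straight from the saved file:
    /sbin/iptables-restore < /root/dsl.fw

    Note also that on Red Hat-style systems 'service iptables restart' reloads whatever was last saved via 'service iptables save' (typically /etc/sysconfig/iptables), so hand-added rules that were never saved that way vanish on restart, which would explain SSH working until the reboot.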

    Read the article

  • DirectX9 / HLSL Shader Model 3 - Passing Doubles between Shaders

    - by P. Avery
I need higher precision on a few values within my vertex and pixel shaders... I'm currently using floats, so I would like to use doubles... I've read that HLSL Shader Model 4 has two functions to convert a double into two unsigned integers and back again ( asuint() and asdouble() ). These functions are only supported in HLSL 4, and I am using DirectX 9, which will only compile HLSL Model 3 and below... How can I pass a double between shaders?

    Here is an implementation for HLSL 4:

    struct VS_INPUT
    {
        float2 v;
    };

    struct PS_INPUT
    {
        uint a;
        uint b;
        uint c;
        uint d;
    };

    PS_INPUT VertexShader( VS_INPUT Input )
    {
        PS_INPUT Output = ( PS_INPUT )0;

        double2 vPos = mul( Input.v, mWorld ).xy;
        asuint( vPos.x, Output.a, Output.b );
        asuint( vPos.y, Output.c, Output.d );

        return Output;
    }

    float4 PixelShader( PS_INPUT Input )
    {
        double2 vPos;
        vPos.x = asdouble( Input.a, Input.b );
        vPos.y = asdouble( Input.c, Input.d );

        ...

        return 1;
    }
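    Shader Model 3 has no 64-bit types at all, so asuint()/asdouble() cannot be emulated directly. One common workaround is the "double-float" idea: split each double into a high/low pair of floats on the CPU, where real doubles exist, and pass the pair through ordinary interpolators. The sketch below is illustrative, with assumed names, not a drop-in replacement for the code above:

    // CPU side (in C++), split once per value, using real doubles:
    //   float hi = (float)d;          // nearest representable float
    //   float lo = (float)(d - hi);   // residual the float could not hold

    struct PS_INPUT
    {
        float4 posSplit : TEXCOORD0;  // (x_hi, x_lo, y_hi, y_lo)
    };

    // Subtract hi parts and lo parts separately before recombining, so
    // the small residuals are not swamped by the large magnitudes. This
    // preserves precision for differences, which is usually what large
    // world-space coordinates are needed for in a pixel shader.
    float2 RelativeTo( float4 split, float4 originSplit )
    {
        float2 hi = split.xz - originSplit.xz;
        float2 lo = split.yw - originSplit.yw;
        return hi + lo;
    }

    Full double-float arithmetic (add/multiply carrying explicit error terms) exists if more than differences are needed, but it costs several ALU instructions per operation.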

    Read the article

  • Enable H.264 (or x264) in AVIDemux

    - by Thomas
I am trying to get AVIDemux set up with the x264 codec using this tutorial. The following is what goes down when I get to the ./configure --enable-mp4-output command:

    Thomas-Phillipss-MacBook:x264 tomdabomb2u$ sudo ./configure --enable-mp4-output
    Password:
    Unknown option --enable-mp4-output, ignored
    Found no assembler
    Minimum version is yasm-0.6.2
    If you really want to compile without asm, configure with --disable-asm.

    So I tried it:

    Thomas-Phillipss-MacBook:x264 tomdabomb2u$ sudo ./configure --enable-mp4-output --disable-asm
    Unknown option --enable-mp4-output, ignored
    Warning: gpac is too old, update to 2007-06-21 UTC or later
    Platform:   X86_64
    System:     MACOSX
    asm:        no
    avs:        no
    lavf:       no
    ffms:       no
    gpac:       no
    pthread:    yes
    filters:    crop select_every
    debug:      no
    gprof:      no
    PIC:        no
    shared:     no
    visualize:  no
    bit depth:  8
    You can run 'make' or 'make fprofiled' now.

    I issued make, and then:

    Thomas-Phillipss-MacBook:x264 tomdabomb2u$ ./x264 -v -q 20 -o foreman.mp4 foreman_part_qcif.yuv 176x144

    And as expected, the results are:

    x264 [error]: not compiled with MP4 output support

    So I'm stuck. Any ideas?
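    The configure output itself points at the two blockers: no assembler (yasm missing or older than 0.6.2) and a gpac too old for MP4 muxing, which is why the summary shows "gpac: no" and the resulting binary lacks MP4 support. A hedged sketch of the usual fix, assuming Homebrew is available (package names are assumptions to verify):

    # Install a recent yasm and gpac, then reconfigure. Note that
    # --enable-mp4-output is not a real x264 flag (hence "Unknown
    # option"); MP4 support turns on by itself once gpac is found.
    brew install yasm gpac
    cd x264
    ./configure        # the summary should now show "gpac: yes"
    make
    ./x264 -q 20 -o foreman.mp4 foreman_part_qcif.yuv 176x144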

    Read the article

  • Android SDK having trouble with ADB

    - by MowDownJoe
So, I installed the Android SDK, Eclipse, and the ADT. Upon firing up Eclipse the first time after setting up the ADT, this error popped up:

    [2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-05-29 12:11:06 - adb] 'adb version' failed!
    /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version':
    Standard Output was:
    Error Output was:
    /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    (the same three messages then repeat a second time)

    I'm not quite sure why this is. It feels weird that there's a missing library there. I'm using Ubuntu 12.04. No adb is a pretty big blow for an Android developer. How do I fix this?
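    The SDK's adb is a 32-bit binary, so on 64-bit Ubuntu 12.04 the 32-bit libncurses it links against is simply absent. A minimal sketch of the usual remedy for that era of Ubuntu (the package name may vary by release):

    # Pull in the 32-bit compatibility libraries, then re-test adb:
    sudo apt-get install ia32-libs
    ~/Downloads/android-sdk-linux/platform-tools/adb version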

    Read the article

  • Can anyone help me prepare the Eclipse IDE for Android development on Ubuntu 12.04?

    - by csbl
I'm new to Linux, and in this particular case, to Ubuntu. I have a small Android project I have to finish by this Friday, and I'm still stuck with installing and preparing the development environment. The only thing I've done is install the Eclipse IDE. I'm still missing the SDK, Java, and anything else that might be needed. Can someone help me through this? It's only because I'm running out of time to develop, or else I would embark on a deeper investigation of this OS.

    I tried the step to install Android platforms, through Eclipse > Help > Install New Software, and I got the following error messages at the end of the process:

    [2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] 'adb version' failed!
    /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version':
    Standard Output was:
    Error Output was:
    /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    (the same three messages then repeat a second time)

    Can anyone help, please??
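    This is the same missing 32-bit libncurses problem as the adb question above, so the same hedged fix applies, plus the Java the asker still needs (package names per 12.04-era Ubuntu):

    sudo apt-get install openjdk-6-jdk   # a JDK for Eclipse/ADT
    sudo apt-get install ia32-libs       # 32-bit libs for the SDK's adb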

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >