Search Results

Search found 17852 results on 715 pages for 'load balancer'.


  • Is Rails Metal (& Rack) a good way to implement a high-traffic web service API?

    - by Greg
    I am working on a very typical web application. The main component of the user experience is a widget that a site owner would install on their front page. Every time their front page loads, the widget talks to our server and displays some of the data that returns.

    So there are two components to this web application:

    1. the front end UI that the site owner uses to configure their widget
    2. the back end component that responds to the widget's web API call

    Previously we had all of this running in PHP. Now we are experimenting with Rails, which is fantastic for #1 (the front end UI). The question is how to do #2, serving the widget information, efficiently. Obviously this is much higher load than the front end, since it is called every time the front page loads on one of our clients' websites. I can see two obvious approaches:

    A. Parallel stack: set up a parallel stack that uses something other than Rails (e.g. our old PHP-based approach) but accesses the same database as the front end
    B. Rails Metal: use Rails Metal/Rack to bypass the Rails routing mechanism, but keep the API call responder within the Rails app

    My main question: is Rails/Metal a reasonable approach for something like this? But also:

    - Will the overhead of loading the Rails environment still be too heavy?
    - Is there a way to get even closer to the metal with Rails, bypassing most of the environment?
    - Will Rails/Metal performance approach the performance of a similar task on straight PHP (just looking for a ballpark here)?

    And: is there a 'C' option that would be much better than both A and B? That is, something short of going to the lengths of C code compiled to binary and installed as an nginx or Apache module? Thanks in advance for any insights.

    Read the article

  • YUV Textures and Shaders

    - by Luca
    I've always used RGB textures. Now comes the need to use YUV textures (a set of three textures, specifying 1 luminance and 2 chrominance channels). Of course the YUV texture could be converted on the CPU, yielding an RGB texture usable as usual... but I need to get RGB pixels directly on the GPU, to avoid unnecessary processor load.

    The problem becomes awkward, since for a single texture I have to specify the following items in the shader source:

    - three sampler uniforms, one for each channel
    - two integer uniforms, specifying the chrominance channels' subsampling
    - a mat3 uniform, for the specific YUV to RGB conversion matrix

    This would have to be done for each YUV texture... Is it possible to "compress" the required uniforms, and get RGB values fairly easily? Actually, I think this could help:

    - Texture sizes, including mipmaps, can be queried. With this, it's possible to save the two integer uniforms, since their values can be derived from the ratio between texture extents.
    - The mat3 uniforms could be collected as globals, and selected with the preprocessor.

    But what design should I use to specify three (related) textures? Is it possible to use texture levels to access multiple textures? Could texture arrays be usable? And what about using rectangle textures, which don't support mipmaps? Maybe a shader abstraction (a struct definition and related functions) could help? Thank you.
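    For reference, one commonly used mapping for that mat3 (an aside, assuming the JPEG/BT.601 full-range coefficients with chroma stored in [0,1] and centered at 0.5; BT.709 and limited-range video use different constants):

        R = Y + 1.402 * (Cr - 0.5)
        G = Y - 0.344136 * (Cb - 0.5) - 0.714136 * (Cr - 0.5)
        B = Y + 1.772 * (Cb - 0.5)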

    Read the article

  • Unicode collations problem?

    - by Bayonian
    (.NET 3.5 SP1, VS 2008, VB.NET, MSSQL Server 2008) I'm writing a small web app to test Khmer Unicode and Lao Unicode. I have a table that stores text in Khmer Unicode with the following structure:

        [t_id] [int] IDENTITY(1,1) NOT NULL
        [t_chid] [int] NOT NULL
        [t_vn] [int] NOT NULL
        [t_v] [nvarchar](max) NOT NULL

    I can use LINQ to SQL to do CRUD normally. The text displays properly on the web page, even though I didn't change the default collation of MSSQL Server 2008. When it comes to searching the column [t_v], the page takes a very long time to load and, in fact, it loads every row of that column. It never compares with the "keyword" criteria that I use for the search. Here's my query for the search:

        Public Shared Function SearchTestingKhmerTable(ByVal keyword As String) As DataTable
            Dim db As New BibleDataClassesDataContext()
            Dim query = From b In db.khmer_books _
                        From ch In db.khmer_chapters _
                        From v In db.testing_khmers _
                        Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _
                        Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v

            Dim dtDataTableOne = New DataTable("dtOne")
            dtDataTableOne.Columns.Add("bid", GetType(Integer))
            dtDataTableOne.Columns.Add("btitle", GetType(String))
            dtDataTableOne.Columns.Add("chid", GetType(Integer))
            dtDataTableOne.Columns.Add("chn", GetType(Integer))
            dtDataTableOne.Columns.Add("vid", GetType(Integer))
            dtDataTableOne.Columns.Add("vn", GetType(Integer))
            dtDataTableOne.Columns.Add("verse", GetType(String))

            For Each r In query
                dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v})
            Next

            Return dtDataTableOne
        End Function

    Please note that I use the exact same code and database design with Lao Unicode and it works just fine; I get the expected results for the search. I can't figure out what the problem is with searching the Khmer table.

    Read the article

  • DataContractSerializer does not properly deserialize, values for members in the object are missing

    - by sachin
    My SomeClass:

        [Serializable]
        [DataContract(Namespace = "")]
        public class SomeClass
        {
            [DataMember]
            public string FirstName { get; set; }

            [DataMember]
            public string LastName { get; set; }

            [DataMember]
            private IDictionary<long, string> customValues;

            public IDictionary<long, string> CustomValues
            {
                get { return customValues; }
                set { customValues = value; }
            }
        }

    My XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <SomeClass>
          <FirstName>John</FirstName>
          <LastName>Smith</LastName>
          <CustomValues>
            <Value1>One</Value1>
            <Value2>Two</Value2>
          </CustomValues>
        </SomeClass>

    But my problem is that, for the class, I am only getting some of the data for my members when I deserialize:

        var xmlRoot = XElement.Load(new StreamReader(
            filterContext.HttpContext.Request.InputStream,
            filterContext.HttpContext.Request.ContentEncoding));

        XmlDictionaryReader reader = XmlDictionaryReader.CreateDictionaryReader(xmlRoot.CreateReader());
        DataContractSerializer ser = new DataContractSerializer(typeof(SomeClass));

        // Deserialize the data and read it from the instance.
        SomeClass someClass = (SomeClass)ser.ReadObject(reader, true);

    So when I check "someClass", FirstName will have the value John, but LastName will be null. The mystery is how I can get some of the data and not all of the data for the class. So DataContractSerializer is not pulling all the data from the XML when deserializing. Am I doing something wrong? Any help is appreciated. Thanks in advance. Let me know if anyone has the same problem or a solution.
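    A hedged C# sketch of one thing worth checking: DataContractSerializer reads elements in contract order (alphabetical by member name unless Order is given) and silently skips elements that appear out of the expected order, leaving those members at their defaults. Pinning Name and Order so the contract matches the document would rule that out; the attribute values below are illustrative, not a confirmed fix.

        [DataContract(Namespace = "")]
        public class SomeClass
        {
            // Explicit Order pins the expected element sequence to the XML's
            // actual order (FirstName, LastName, CustomValues) instead of the
            // default alphabetical ordering of member names.
            [DataMember(Order = 0)]
            public string FirstName { get; set; }

            [DataMember(Order = 1)]
            public string LastName { get; set; }

            // Name maps the private field to the element name in the document.
            [DataMember(Name = "CustomValues", Order = 2)]
            private IDictionary<long, string> customValues;
        }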

    Read the article

  • Invalid message signature when running OpenId Provider on Cluster

    - by Garth
    Introduction: We have an OpenID Provider which we created using the DotNetOpenAuth component. Everything works great when we run the provider on a single node, but when we move the provider to a load balanced cluster, where multiple servers handle requests for each session, we get issues with the message signing, as the DotNetOpenAuth component seems to be using something unique to each cluster node to create the signature.

    Exception:

        DotNetOpenAuth.Messaging.Bindings.InvalidSignatureException: Message signature was incorrect.
           at DotNetOpenAuth.OpenId.ChannelElements.SigningBindingElement.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\ChannelElements\SigningBindingElement.cs:line 139
           at DotNetOpenAuth.Messaging.Channel.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\Messaging\Channel.cs:line 940
           at DotNetOpenAuth.OpenId.ChannelElements.OpenIdChannel.ProcessIncomingMessage(IProtocolMessage message) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\ChannelElements\OpenIdChannel.cs:line 172
           at DotNetOpenAuth.Messaging.Channel.ReadFromRequest(HttpRequestInfo httpRequest) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\Messaging\Channel.cs:line 378
           at DotNetOpenAuth.OpenId.RelyingParty.OpenIdRelyingParty.GetResponse(HttpRequestInfo httpRequestInfo) in c:\BuildAgent\work\7ab20c0d948e028f\src\DotNetOpenAuth\OpenId\RelyingParty\OpenIdRelyingParty.cs:line 493

    Setup: We have machine.config set up to use the same machine key on all cluster nodes, and we have set up out-of-process session state with SQL Server.

    Question: How do we configure the key used by DotNetOpenAuth to sign its messages so that the client will trust responses from all servers in the cluster during the same session?

    Read the article

  • Cast Graphics to Image in C#

    - by WebDevHobo
    I have a PictureBox on a Windows Form. I do the following to load a PNG file into it:

        Bitmap bm = (Bitmap)Image.FromFile("Image.PNG", true);
        Bitmap tmp;

        public Form1()
        {
            InitializeComponent();
            this.tmp = new Bitmap(bm.Width, bm.Height);
        }

        private void pictureBox1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImage(this.bm, new Rectangle(0, 0, tmp.Width, tmp.Height),
                0, 0, tmp.Width, tmp.Height, GraphicsUnit.Pixel);
        }

    However, I need to draw things on the image and then have the result displayed again. Drawing rectangles can only be done via the Graphics class. I'd need to draw the needed rectangles on the image, make it an instance of the Image class again, and save that to this.bm. I can add a button that executes this.pictureBox1.Refresh();, forcing the PictureBox to be painted again, but I can't cast Graphics to Image. Because of that, I can't save the edits to the this.bm bitmap. That's my problem, and I see no way out.
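    A minimal sketch of the usual way around this (assuming the goal is to persist the drawing into the bitmap itself): instead of casting, obtain a Graphics surface that draws directly onto the Bitmap via Graphics.FromImage. One caveat: FromImage throws for indexed pixel formats, so an indexed PNG would first need copying into a new Bitmap.

        // Sketch: draw onto the bitmap itself instead of casting Graphics to Image.
        // Graphics.FromImage returns a surface whose drawing operations mutate bm.
        using (Graphics g = Graphics.FromImage(this.bm))
        {
            g.DrawRectangle(Pens.Red, new Rectangle(10, 10, 50, 50));
        } // disposing the Graphics flushes the drawing into the bitmap

        this.pictureBox1.Refresh(); // repaint; pictureBox1_Paint now draws the edited bitmap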

    Read the article

  • Asynchronous HTTP Client for Java

    - by helifreak
    As a relative newbie in the Java world, I am finding many things frustratingly obtuse to accomplish that are relatively trivial in many other frameworks. A primary example is a simple solution for asynchronous HTTP requests. Seeing as one doesn't seem to already exist, what is the best approach? Creating my own threads using a blocking-type lib like HttpClient or the built-in Java HTTP stuff, or should I use the newer non-blocking IO Java stuff? It seems particularly complex for something which should be simple.

    What I am looking for is something easy to use from a developer point of view - something similar to URLLoader in AS3 - where you simply create a URLRequest, attach a bunch of event handlers to handle the completion, errors, progress, etc., and call a method to fire it off. If you are not familiar with URLLoader in AS3, it's super easy and looks something like this:

        private void getURL(String url)
        {
            URLLoader loader = new URLLoader();
            loader.addEventListener(Event.Complete, completeHandler);
            loader.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler);
            loader.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);

            URLRequest request = new URLRequest(url);

            // fire it off - this is asynchronous so we handle
            // completion with event handlers
            loader.load(request);
        }

        private void completeHandler(Event event)
        {
            URLLoader loader = (URLLoader)event.target;
            Object results = loader.data;
            // process results
        }

        private void httpStatusHandler(Event event)
        {
            // check status code
        }

        private void ioErrorHandler(Event event)
        {
            // handle errors
        }

    Read the article

  • unexplainable packet drops with 5 ethernet NICs and low traffic on Ubuntu

    - by jon
    I'm stuck on a problem where my machine started to drop packets, with no sign of ANY system load or high interrupt usage, after an upgrade to Ubuntu 12.04.

    My server is a network monitoring sensor, running Ubuntu LTS 12.04; it passively collects packets from 5 interfaces doing network intrusion detection type stuff. Before the upgrade I managed to collect 200+ GB of packets a day while writing them to disk, with around 0% packet loss depending on the day, with the help of CPU affinity and NIC IRQ-to-CPU bindings. Now I lose a great deal of packets with none of my applications running, and at a very low PPS rate which a modern workstation NIC would have no trouble with.

    Specs:

    - x64 Xeon, 4 cores, 3.2 GHz
    - 16 GB RAM
    - NICs: 5 Intel Pro NICs using the e1000 driver (NAPI). [1] eth0 and eth1 are integrated NICs (on the motherboard); there are 2 other PCI-X network cards, each with 2 Ethernet ports.
    - 3 of the interfaces are running at Gigabit Ethernet; the others are not because they're attached to hubs.
    - Hardware specs: [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm

        # uptime
         17:36:00 up 1:43, 2 users, load average: 0.00, 0.01, 0.05
        # uname -a
        Linux nms 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    I also have the CPU governor set to performance mode and irqbalance off. The problem still occurs with them on.

        # lspci -t -vv
        -[0000:00]-+-00.0 Intel Corporation E7520 Memory Controller Hub
                   +-02.0-[01-03]--+-00.0-[02]----0e.0 Dell PowerEdge Expandable RAID controller 4
                   |               \-00.2-[03]--
                   +-04.0-[04]--
                   +-05.0-[05-07]--+-00.0-[06]----07.0 Intel Corporation 82541GI Gigabit Ethernet Controller
                   |               \-00.2-[07]----08.0 Intel Corporation 82541GI Gigabit Ethernet Controller
                   +-06.0-[08-0a]--+-00.0-[09]--+-04.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               |            \-04.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |               \-00.2-[0a]--+-02.0 Digium, Inc. Wildcard TE210P/TE212P dual-span T1/E1/J1 card 3.3V
                   |                            +-03.0 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   |                            \-03.1 Intel Corporation 82546EB Gigabit Ethernet Controller (Copper)
                   +-1d.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1
                   +-1d.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2
                   +-1d.2 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3
                   +-1d.7 Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller
                   +-1e.0-[0b]----0d.0 Advanced Micro Devices [AMD] nee ATI RV100 QY [Radeon 7000/VE]
                   +-1f.0 Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge
                   \-1f.1 Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller

    I believe neither the NIC nor the NIC drivers are dropping the packets, because ethtool reports 0 under rx_missed_errors and rx_no_buffer_count for each interface. On the old system, if it couldn't keep up, this is where the drops would show. I drop packets on multiple interfaces just about every second, usually in small increments of 2-4.

    I tried all these sysctl values; I'm currently using the uncommented ones.

        # cat /etc/sysctl.conf
        # high
        net.core.netdev_max_backlog = 3000000
        net.core.rmem_max = 16000000
        net.core.rmem_default = 8000000

        # defaults
        #net.core.netdev_max_backlog = 1000
        #net.core.rmem_max = 131071
        #net.core.rmem_default = 163480

        # moderate
        #net.core.netdev_max_backlog = 10000
        #net.core.rmem_max = 33554432
        #net.core.rmem_default = 33554432

    Here's an example of an interface stats report with ethtool.
    They are all the same, nothing is out of the ordinary (I think), so I'm only going to show one:

        # ethtool -S eth2
        NIC statistics:
             rx_packets: 7498
             tx_packets: 0
             rx_bytes: 2722585
             tx_bytes: 0
             rx_broadcast: 327
             tx_broadcast: 0
             rx_multicast: 1504
             tx_multicast: 0
             rx_errors: 0
             tx_errors: 0
             tx_dropped: 0
             multicast: 1504
             collisions: 0
             rx_length_errors: 0
             rx_over_errors: 0
             rx_crc_errors: 0
             rx_frame_errors: 0
             rx_no_buffer_count: 0
             rx_missed_errors: 0
             tx_aborted_errors: 0
             tx_carrier_errors: 0
             tx_fifo_errors: 0
             tx_heartbeat_errors: 0
             tx_window_errors: 0
             tx_abort_late_coll: 0
             tx_deferred_ok: 0
             tx_single_coll_ok: 0
             tx_multi_coll_ok: 0
             tx_timeout_count: 0
             tx_restart_queue: 0
             rx_long_length_errors: 0
             rx_short_length_errors: 0
             rx_align_errors: 0
             tx_tcp_seg_good: 0
             tx_tcp_seg_failed: 0
             rx_flow_control_xon: 0
             rx_flow_control_xoff: 0
             tx_flow_control_xon: 0
             tx_flow_control_xoff: 0
             rx_long_byte_count: 2722585
             rx_csum_offload_good: 0
             rx_csum_offload_errors: 0
             alloc_rx_buff_failed: 0
             tx_smbus: 0
             rx_smbus: 0
             dropped_smbus: 0

        # ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:373348 errors:16 dropped:95 overruns:0 frame:16
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:356830572 (356.8 MB)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  HWaddr 00:11:43:e0:e2:8d
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:13616 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:8690528 (8.6 MB)  TX bytes:0 (0.0 B)

        eth2      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6a
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:7750 errors:0 dropped:471 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2780935 (2.7 MB)  TX bytes:0 (0.0 B)

        eth3      Link encap:Ethernet  HWaddr 00:04:23:e1:77:6b
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:5112 errors:0 dropped:206 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:639472 (639.4 KB)  TX bytes:0 (0.0 B)

        eth4      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6c
                  UP BROADCAST RUNNING NOARP PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
                  RX packets:961467 errors:0 dropped:935 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:958561305 (958.5 MB)  TX bytes:0 (0.0 B)

        eth5      Link encap:Ethernet  HWaddr 00:04:23:b6:35:6d
                  inet addr:192.168.1.6  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4264 errors:0 dropped:16 overruns:0 frame:0
                  TX packets:699 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:572228 (572.2 KB)  TX bytes:124456 (124.4 KB)

    I tried the defaults, then started to play around with settings. I wasn't using any flow control, and I had increased the RxDescriptors count to 4096 before the upgrade as well, without any problems.

        # cat /etc/modprobe.d/e1000.conf
        options e1000 XsumRX=0,0,0,0,0 RxDescriptors=4096,4096,4096,4096,4096 FlowControl=0,0,0,0,0 debug=16

    Here's my network configuration file. I turned off checksumming and various offloading mechanisms, along with setting CPU affinity, with heavy-use interfaces getting an entire CPU and light-use interfaces sharing a CPU. I used these settings prior to the upgrade without problems.
        # cat /etc/network/interfaces
        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet manual
                pre-up /sbin/ethtool -G eth0 rx 4096 tx 0
                pre-up /sbin/ethtool -K eth0 gro off gso off rx off
                pre-up /sbin/ethtool -A eth0 rx off autoneg off
                up ifconfig eth0 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
                post-up echo "4" > /proc/irq/48/smp_affinity
                down ifconfig eth0 down
                post-down /sbin/ethtool -G eth0 rx 256 tx 256
                post-down /sbin/ethtool -K eth0 gro on gso on rx on
                post-down /sbin/ethtool -A eth0 rx on autoneg on

        auto eth1
        iface eth1 inet manual
                pre-up /sbin/ethtool -G eth1 rx 4096 tx 0
                pre-up /sbin/ethtool -K eth1 gro off gso off rx off
                pre-up /sbin/ethtool -A eth1 rx off autoneg off
                up ifconfig eth1 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
                post-up echo "4" > /proc/irq/49/smp_affinity
                down ifconfig eth1 down
                post-down /sbin/ethtool -G eth1 rx 256 tx 256
                post-down /sbin/ethtool -K eth1 gro on gso on rx on
                post-down /sbin/ethtool -A eth1 rx on autoneg on

        auto eth2
        iface eth2 inet manual
                pre-up /sbin/ethtool -G eth2 rx 4096 tx 0
                pre-up /sbin/ethtool -K eth2 gro off gso off rx off
                pre-up /sbin/ethtool -A eth2 rx off autoneg off
                up ifconfig eth2 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
                post-up echo "1" > /proc/irq/82/smp_affinity
                down ifconfig eth2 down
                post-down /sbin/ethtool -G eth2 rx 256 tx 256
                post-down /sbin/ethtool -K eth2 gro on gso on rx on
                post-down /sbin/ethtool -A eth2 rx on autoneg on

        auto eth3
        iface eth3 inet manual
                pre-up /sbin/ethtool -G eth3 rx 4096 tx 0
                pre-up /sbin/ethtool -K eth3 gro off gso off rx off
                pre-up /sbin/ethtool -A eth3 rx off autoneg off
                up ifconfig eth3 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
                post-up echo "2" > /proc/irq/83/smp_affinity
                down ifconfig eth3 down
                post-down /sbin/ethtool -G eth3 rx 256 tx 256
                post-down /sbin/ethtool -K eth3 gro on gso on rx on
                post-down /sbin/ethtool -A eth3 rx on autoneg on

        auto eth4
        iface eth4 inet manual
                pre-up /sbin/ethtool -G eth4 rx 4096 tx 0
                pre-up /sbin/ethtool -K eth4 gro off gso off rx off
                pre-up /sbin/ethtool -A eth4 rx off autoneg off
                up ifconfig eth4 0.0.0.0 -arp promisc mtu 1500 allmulti txqueuelen 0 up
                post-up echo "4" > /proc/irq/77/smp_affinity
                down ifconfig eth4 down
                post-down /sbin/ethtool -G eth4 rx 256 tx 256
                post-down /sbin/ethtool -K eth4 gro on gso on rx on
                post-down /sbin/ethtool -A eth4 rx on autoneg on

        auto eth5
        iface eth5 inet static
                pre-up /etc/fw.conf
                address 192.168.1.1
                netmask 255.255.255.0
                broadcast 192.168.1.255
                gateway 192.168.1.1
                dns-nameservers 192.168.1.2 192.168.1.3
                up ifconfig eth5 up
                post-up echo "8" > /proc/irq/77/smp_affinity
                down ifconfig eth5 down

    Here are a few examples of packet drops. I ran one after another, probably totaling 3 or 4 seconds. You can see increases in the drops between the 1st and 3rd. This was a non-busy time; very little traffic.

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 505
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 225
        lo: 0
        eth2: 507
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1034

        # awk '{ print $1,$5 }' /proc/net/dev
        Inter-| face drop
        eth3: 227
        lo: 0
        eth2: 512
        eth1: 0
        eth5: 17
        eth0: 105
        eth4: 1039

    I tried the pci=noacpi options. With and without, it's the same.
    This is what my interrupt stats looked like before the upgrade. After it, with ACPI on PCI, it showed multiple NICs bound to an interrupt and shared with other devices such as USB drives, which I didn't like, so I think I'm going to keep it with ACPI off as it's easier to designate sole-purpose interrupts. Is there any advantage I would have using the default, i.e. ACPI with PCI?

        # cat /etc/default/grub | grep CMD_LINE
        GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 noacpi pci=noacpi"
        GRUB_CMDLINE_LINUX=""

        # cat /proc/interrupts
                   CPU0       CPU1       CPU2       CPU3
          0:         45          0          0         16   IO-APIC-edge      timer
          1:          1          0          0       7936   IO-APIC-edge      i8042
          2:          0          0          0          0   XT-PIC-XT-PIC     cascade
          6:          0          0          0          3   IO-APIC-edge      floppy
          8:          0          0          0          1   IO-APIC-edge      rtc0
          9:          0          0          0          0   IO-APIC-edge      acpi
         12:          0          0          0       1809   IO-APIC-edge      i8042
         14:          1          0          0       4498   IO-APIC-edge      ata_piix
         15:          0          0          0          0   IO-APIC-edge      ata_piix
         16:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb2
         18:          0          0          0       1350   IO-APIC-fasteoi   uhci_hcd:usb4, radeon
         19:          0          0          0          0   IO-APIC-fasteoi   uhci_hcd:usb3
         23:          0          0          0       4099   IO-APIC-fasteoi   ehci_hcd:usb1
         38:          0          0          0      61963   IO-APIC-fasteoi   megaraid
         48:          0          0    1002319          4   IO-APIC-fasteoi   eth0
         49:          0          0      38772          3   IO-APIC-fasteoi   eth1
         77:          0          0     130076     432159   IO-APIC-fasteoi   eth4
         78:          0          0          0      23917   IO-APIC-fasteoi   eth5
         82:    1329033          0          0          4   IO-APIC-fasteoi   eth2
         83:          0    4886525          0          6   IO-APIC-fasteoi   eth3
        NMI:          5          6          4          5   Non-maskable interrupts
        LOC:      61409      57076      64257     114764   Local timer interrupts
        SPU:          0          0          0          0   Spurious interrupts
        IWI:          0          0          0          0   IRQ work interrupts
        RES:      17956      25333      13436      14789   Rescheduling interrupts
        CAL:      22436        607        539        478   Function call interrupts
        TLB:       1525       1458       4600       4151   TLB shootdowns
        TRM:          0          0          0          0   Thermal event interrupts
        THR:          0          0          0          0   Threshold APIC interrupts
        MCE:          0          0          0          0   Machine check exceptions
        MCP:         16         16         16         16   Machine check polls
        ERR:          0
        MIS:          0

    Here's sample output of vmstat, showing the system. Barebones system right now.

        root@nms:~# vmstat -S m 1
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0  14992    192   1029    0    0    56     2  419   29  1  0 99  0
         0  0      0  14992    192   1029    0    0     0     0  922   27  0  0 100 0
         0  0      0  14991    192   1029    0    0     0    36  763   50  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  646   35  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  722   54  0  0 100 0
         0  0      0  14991    192   1029    0    0     0     0  793   27  0  0 100 0
        ^C

    Here's the dmesg output. I can't figure out why my PCI-X slots are negotiated as PCI. The network cards are all PCI-X, with the exception of the integrated NICs that came with the server. In the output below it looks as if eth3 and eth2 negotiated at PCI-X speeds rather than PCI:66MHz. Wouldn't they all drop to PCI:66MHz? If the integrated NICs are PCI, as labeled below (eth0, eth1), then wouldn't all devices on the bus drop down to that slower bus speed? If not, I still don't know why only one of my NICs (each card has two Ethernet ports) is labeled as PCI-X in the output below. Does that mean it is running at PCI-X speeds, or is it just showing that it's capable?

        # dmesg | grep e1000
        [ 3678.349337] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
        [ 3678.349342] e1000: Copyright (c) 1999-2006 Intel Corporation.
        [ 3678.349394] e1000 0000:06:07.0: PCI->APIC IRQ transform: INT A -> IRQ 48
        [ 3678.409725] e1000 0000:06:07.0: Receive Descriptors set to 4096
        [ 3678.409730] e1000 0000:06:07.0: Checksum Offload Disabled
        [ 3678.409734] e1000 0000:06:07.0: Flow Control Disabled
        [ 3678.586409] e1000 0000:06:07.0: eth0: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8c
        [ 3678.586419] e1000 0000:06:07.0: eth0: Intel(R) PRO/1000 Network Connection
        [ 3678.586642] e1000 0000:07:08.0: PCI->APIC IRQ transform: INT A -> IRQ 49
        [ 3678.649854] e1000 0000:07:08.0: Receive Descriptors set to 4096
        [ 3678.649859] e1000 0000:07:08.0: Checksum Offload Disabled
        [ 3678.649863] e1000 0000:07:08.0: Flow Control Disabled
        [ 3678.826436] e1000 0000:07:08.0: eth1: (PCI:66MHz:32-bit) 00:11:43:e0:e2:8d
        [ 3678.826444] e1000 0000:07:08.0: eth1: Intel(R) PRO/1000 Network Connection
        [ 3678.826627] e1000 0000:09:04.0: PCI->APIC IRQ transform: INT A -> IRQ 82
        [ 3679.093266] e1000 0000:09:04.0: Receive Descriptors set to 4096
        [ 3679.093271] e1000 0000:09:04.0: Checksum Offload Disabled
        [ 3679.093275] e1000 0000:09:04.0: Flow Control Disabled
        [ 3679.130239] e1000 0000:09:04.0: eth2: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6a
        [ 3679.130246] e1000 0000:09:04.0: eth2: Intel(R) PRO/1000 Network Connection
        [ 3679.130449] e1000 0000:09:04.1: PCI->APIC IRQ transform: INT B -> IRQ 83
        [ 3679.397312] e1000 0000:09:04.1: Receive Descriptors set to 4096
        [ 3679.397318] e1000 0000:09:04.1: Checksum Offload Disabled
        [ 3679.397321] e1000 0000:09:04.1: Flow Control Disabled
        [ 3679.434350] e1000 0000:09:04.1: eth3: (PCI-X:133MHz:64-bit) 00:04:23:e1:77:6b
        [ 3679.434360] e1000 0000:09:04.1: eth3: Intel(R) PRO/1000 Network Connection
        [ 3679.434553] e1000 0000:0a:03.0: PCI->APIC IRQ transform: INT A -> IRQ 77
        [ 3679.704072] e1000 0000:0a:03.0: Receive Descriptors set to 4096
        [ 3679.704077] e1000 0000:0a:03.0: Checksum Offload Disabled
        [ 3679.704081] e1000 0000:0a:03.0: Flow Control Disabled
        [ 3679.738364] e1000 0000:0a:03.0: eth4: (PCI:33MHz:64-bit) 00:04:23:b6:35:6c
        [ 3679.738371] e1000 0000:0a:03.0: eth4: Intel(R) PRO/1000 Network Connection
        [ 3679.738538] e1000 0000:0a:03.1: PCI->APIC IRQ transform: INT B -> IRQ 78
        [ 3680.046060] e1000 0000:0a:03.1: eth5: (PCI:33MHz:64-bit) 00:04:23:b6:35:6d
        [ 3680.046067] e1000 0000:0a:03.1: eth5: Intel(R) PRO/1000 Network Connection
        [ 3682.132415] e1000: eth0 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.224423] e1000: eth1 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.316385] e1000: eth2 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None
        [ 3682.408391] e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.500396] e1000: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
        [ 3682.708401] e1000: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX

    At first I thought it was the NIC drivers, but I'm not so sure. I really have no idea where else to look at the moment. Any help is greatly appreciated as I'm struggling with this. If you need more information, just ask. Thanks!

    [1] http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/e1000.txt?v=2.6.11.8
    [2] http://support.dell.com/support/edocs/systems/pe2850/en/ug/t1390aa.htm

    Read the article

  • How to utilize WebDev.WebServer.exe (VS Web Server) in x64?

    - by Nick Craver
    Visual Studio is x86-only until at least the 2010 release comes around. My question is: can anyone think of a way, or does anyone know of an independent ASP.NET debug server that's x64, for 2008?

    Background: Our ASP.NET application runs against Oracle as the DB. Since we're on 64-bit servers for memory concerns later, we need to use Oracle's 64-bit drivers (Instant Client).

    Setup:

    - x64 OS (XP or Windows 7)
    - IIS (5 or 7, both x64 app pools)
    - Oracle 64-bit Instant Client (separate directory, in the PATH)
    - Visual Studio 2008 SP1

    In IIS the application pool runs as 64-bit and uses the Oracle drivers as intended; however, since WebDev.WebServer.exe is 32-bit, you'll get a BadImageFormatException because it's trying to load 64-bit driver DLLs in a 32-bit environment. All of our developers would like to be able to use the quick debug server via Visual Studio 2008, but since it runs as 32-bit we're unable to. Some problems we run into occur during application startup, so although we're attaching to the IIS process, sometimes that isn't enough to track an issue down.

    Are there any alternatives or workarounds? We would like to match our Dev/Val/Prod tiers as much as possible, so everything running in x64 would be ideal.

    Read the article

  • Reattaching an object graph to an EntityContext: "cannot track multiple objects with the same key"

    - by dkr88
    Can EF really be this bad? Maybe... Let's say I have a fully loaded, disconnected object graph that looks like this:

        myReport = {Report}
                     {ReportEdit {User: "JohnDoe"}}
                     {ReportEdit {User: "JohnDoe"}}

    Basically a report with 2 edits that were done by the same user. And then I do this:

        EntityContext.Attach(myReport);

        InvalidOperationException: An object with the same key already exists in the
        ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.

    Why? Because EF is trying to attach the {User: "JohnDoe"} entity TWICE. This will work:

        myReport = {Report}
                     {ReportEdit {User: "JohnDoe"}}

        EntityContext.Attach(myReport);

    No problems here, because the {User: "JohnDoe"} entity only appears in the object graph once. What's more, since you can't control how EF attaches an entity, there is no way to stop it from attaching the entire object graph. So really, if you want to reattach a complex entity that contains more than one reference to the same entity... well, good luck. At least that's how it looks to me. Any comments?

    UPDATE: Added sample code:

        // Load the report
        Report theReport;
        using (var context1 = new TestEntities())
        {
            context1.Reports.MergeOption = MergeOption.NoTracking;
            theReport = (from r in context1.Reports.Include("ReportEdits.User")
                         where r.Id == reportId
                         select r).First();
        }

        // theReport looks like this:
        // {Report[Id=1]}
        //   {ReportEdit[Id=1] {User[Id=1,Name="John Doe"]}
        //   {ReportEdit[Id=2] {User[Id=1,Name="John Doe"]}

        // Try to re-attach the report object graph
        using (var context2 = new TestEntities())
        {
            context2.Attach(theReport); // InvalidOperationException
        }
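    A hedged workaround sketch (property names mirror the example above): with MergeOption.NoTracking, each materialized row yields its own instance, so the two edits likely reference two distinct User objects that share one key. Deduplicating those references before Attach may avoid the collision:

        // Hedged sketch: collapse duplicate User instances (same key, different
        // objects) into one canonical instance per key before Attach.
        var usersById = new Dictionary<int, User>();
        foreach (var edit in theReport.ReportEdits)
        {
            User canonical;
            if (usersById.TryGetValue(edit.User.Id, out canonical))
                edit.User = canonical;               // reuse the instance already seen
            else
                usersById[edit.User.Id] = edit.User;
        }

        using (var context2 = new TestEntities())
        {
            context2.Attach(theReport);              // each key now maps to one object
        }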

    Read the article

  • [Gray Hat Python] Simple debugger won't work?

    - by Rami Jarrar
    Hi, I'm reading Gray Hat Python, and I've reached this:

        class debugger():
            def __init__(self):
                self.h_process = None
                self.pid = None
                self.debugger_active = False

            def load(self, path_to_exe):
                creation_flags = DEBUG_PROCESS
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindows = 0x0
                startupinfo.cb = sizeof(startupinfo)
                if kernel32.CreateProcessA(path_to_exe,
                                           None, None, None, None,
                                           creation_flags, None, None,
                                           byref(startupinfo),
                                           byref(process_information)):
                    print "[*] We have successfully launched the process!"
                    print "[*] PID: %d" % (process_information.dwProcessId)
                    self.h_process = self.open_process(process_information.dwProcessId)
                else:
                    print "[*] Error: 0x%08x." % (kernel32.GetLastError())

            def open_process(self, pid):
                h_process = self.open_process(pid)
                if kernel32.DebugActiveProcess(pid):
                    self.debugger_active = True
                    self.pid = int(pid)
                    self.run()
                else:
                    print "[*] Unable to attach to the process."

            def run(self):
                while self.debugger_active == True:
                    self.get_debug_event()

            def get_debug_event(self):
                debug_event = DEBUG_EVENT()
                continue_status = DBG_CONTINUE
                if kernel32.WaitForDebugEvent(byref(debug_event), INFINITE):
                    raw_input("Press a Key to continue...")
                    self.debugger_active = False
                    kernel32.ContinueDebugEvent( \
                        debug_event.dwProcessId, \
                        debug_event.dwThreadId, \
                        continue_status )

            def detach(self):
                if kernel32.DebugActiveProcessStop(self.pid):
                    print "[*] Finished debugging. Exiting..."
                    return True
                else:
                    print "There was an error"
                    return False

    When I run my_test.py:

        import my_dbg

        debugger = my_dbg.debugger()
        pid = raw_input('Enter the PID of the process to attach to: ')
        debugger.open_process(int(pid))
        debugger.detach()

    I get this error:

        Traceback (most recent call last):
          File "C:/Python26/dbgpy/my_test.py", line 5, in <module>
            debugger.attach(int(pid))
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          ...........
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
          File "C:/Python26/dbgpy\my_dbg.py", line 37, in attach
            h_process = self.attach(pid)
        RuntimeError: maximum recursion depth exceeded

    It's because of the loop and something else, but what is it? I'm running on Windows using Python 2.6.4. :)

    Update: I removed h_process = self.open_process(pid), but I get the same error for the next instruction, if kernel32.DebugActiveProcess(pid), so the problem, I think, is in the while loop. But what is it?

    Read the article

  • Slimbox 2 Plugin, jQuery Flickr, and IE8

    - by Nick H.
    Hello, I am currently developing a site in which I make use of two jQuery plugins:

    1. Flickr jQuery plugin (http://code.google.com/p/jquery-flickr/)
    2. Slimbox (http://code.google.com/p/slimbox/)

    The first plugin is used to pull in Flickr photos from a specific account. These photos are displayed as thumbnails on the page. I am then using the second plugin to display larger views of these images. Because the Flickr photos are fetched when the page loads, I am calling the Slimbox 2 function like this:

        $(document).ready(function() {
            $("#Flickr").flickr(); // Call Flickr plugin
            $(window).bind('load', function() {
                $("#Flickr a").slimbox(); // Call Slimbox 2
            });
        });

    On first testing, this seemed to have worked perfectly. I tested multiple versions of Firefox, IE7, IE6, and Safari. Everything is great. However, the Slimbox lightbox effect does not work in IE8. But if I put IE8 into compatibility mode, everything works as expected. I would like to avoid forcing compatibility mode. There are no JavaScript errors and I am at a loss for testing.

    Here is a link to a sample: http://www.njhall.com/JRMcCourt-Builders/index.html#ourwork

    Any advice would be greatly appreciated. Thanks, Nick

    Read the article

  • Ruby - Feedzirra and updates

    - by mplacona
    Hi, I'm trying to get my head around Feedzirra here. I have it all set up and everything, and can even get results and updates, but something odd is going on. I came up with the following code:

        def initialize(feed_url)
          @feed_url = feed_url
          @rssObject = Feedzirra::Feed.fetch_and_parse(@feed_url)
        end

        def update_from_feed_continuously()
          @rssObject = Feedzirra::Feed.update(@rssObject)
          if @rssObject.updated?
            puts @rssObject.new_entries.count
          else
            puts "nil"
          end
        end

    Right. What I'm doing above is starting with the big feed, and then only getting updates. I'm sure I must be doing something stupid, because even though I'm able to get the updates and store them in the same instance variable, after the first time I'm never able to get them again. Obviously this happens because I'm overwriting my instance variable with only the updates, and I lose the full feed object. I then thought about changing my code to this:

        def update_from_feed_continuously()
          feed = Feedzirra::Feed.update(@rssObject)
          if feed.updated?
            puts feed.new_entries.count
          else
            puts "nil"
          end
        end

    Well, I'm not overwriting anything, and that should be the way to go, right? WRONG. This means I'm doomed to always try to get updates to the same static feed object; although I get the updates in a variable, I'm never actually updating my "static feed object", and newly added items will be appended to my "feed.new_entries" as they are, in theory, new. I'm sure I'm missing a step here, but I'd really appreciate it if someone could shed some light on it. I've been going through this code for hours and can't get to grips with it. Obviously it should work fine if I did something like:

        if feed.updated?
          puts feed.new_entries.count
          @rssObject = initialize(@feed_url)
        else

    Because that would reinitialize my instance variable with a brand new feed object, and the updates would come again. But that also means that any new update added at that exact moment would be lost, as well as being massive overkill, since I'd have to load the whole thing again. Thanks in advance!

    Read the article

  • jQuery AJAX request (Rails 3) gets redirected and returns empty message body (only with SSL)!

    - by elsurudo
    I'm trying to do a manual jQuery AJAX request in the following way:

        $("#user_plan_id").change(function() {
            $("#plan_container").load('/plans/' + this.value);
        });

    I have the "rails.js" file included in my header, and a <%= csrf_meta_tag %>. I see from my log that the request IS getting to the server (although without the authenticity token... does rails.js even do this?), but the response is a 302 (Found) rather than 200, and no data actually gets rendered. Any ideas?

    Edit: I now see that the first request redirects, and the proper partial gets rendered on the redirect. However, the 2nd response's body (on the client side) is still empty. I'm guessing jQuery uses the first response and doesn't have a listener set up for the redirect. How do I get around this? Also, another note: the page doing the requesting is an HTTPS page. Here is what my log says:

        Started GET "/plans/221168073" for 127.0.0.1 at Tue Jun 15 01:24:06 -0400 2010
          Processing by PlansController#show as HTML
          Parameters: {"id"=>"221168073"}
        DEPRECATION WARNING: Using #request_uri is deprecated. Use fullpath instead.
        (called from ensure_proper_protocol at /Users/ernestsurudo/Sites/vidfolia/vendor/plugins/ssl_requirement/lib/ssl_requirement.rb:57)
        Redirected to http://vidfolia.com/plans/221168073
        Completed 302 Found in 1ms

    It turns out that if I turn off the SSL requirement for that page, it works! I still have no idea why, though. So I suppose my question is: what is the workaround?

    Read the article

  • Capturing unhandled exceptions in .Net 2.0. Wrong event called.

    - by SoMoS
    Hello, I'm investigating a bit how unhandled exceptions are managed in .NET, and I'm getting unexpected results that I would like to share with you to see what you think. The first one is pretty simple to see. I wrote this code to do the test: just a button that throws an Exception on the same thread that created the Form.

        Public Class Form1

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Throw New Exception()
            End Sub

            Private Sub UnhandledException(ByVal sender As Object, ByVal e As UnhandledExceptionEventArgs)
                MsgBox(String.Format("Exception: {0}. Ending: {1}. AppDomain: {2}", CType(e.ExceptionObject, Exception).Message, e.IsTerminating.ToString(), AppDomain.CurrentDomain.FriendlyName))
            End Sub

            Private Sub UnhandledThreadException(ByVal sender As Object, ByVal e As System.Threading.ThreadExceptionEventArgs)
                MsgBox(String.Format("Exception: {0}. AppDomain: {1}", e.Exception.Message(), AppDomain.CurrentDomain.FriendlyName))
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                AddHandler AppDomain.CurrentDomain.UnhandledException, AddressOf UnhandledException
                AddHandler Application.ThreadException, AddressOf UnhandledThreadException
            End Sub

        End Class

    When I execute the code inside Visual Studio, UnhandledException is called as expected, but when I execute the application from Windows, UnhandledThreadException is called instead. ¿?¿?¿¿?¿? Does anyone have any idea of what could be happening here? Thanks in advance.
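    For context, this routing is configurable. A C# sketch (the handler body is illustrative) of forcing UI-thread exceptions to bypass Application.ThreadException so that AppDomain.UnhandledException fires in both environments; it must run before any forms are created:

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Rethrow UI-thread exceptions instead of routing them to
                // Application.ThreadException; they then surface at
                // AppDomain.UnhandledException in and outside the debugger.
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException);
                AppDomain.CurrentDomain.UnhandledException +=
                    (s, e) => MessageBox.Show(((Exception)e.ExceptionObject).Message);

                Application.Run(new Form1());
            }
        }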

    Read the article

  • Problem with cucumber

    - by sev
    I want to make a Rails app which requires a minimum of gems. I froze the gems into the app and tried to run cucumber's tests, and I got an error. Below is the sequence of my actions. What did I do wrong?

        rails cucumber && cd cucumber
        rake rails:freeze:gems

    Add at the end of config/environments/test.rb:

        config.gem 'gherkin'
        config.gem 'cucumber-rails'
        config.gem 'database_cleaner'
        config.gem 'webrat'

    Then:

        rake gems:unpack:dependencies RAILS_ENV=test
        rake gems:build RAILS_ENV=test
        rake gems RAILS_ENV=test

        [F] gherkin
            [F] trollop = 1.16.2
        [F] cucumber-rails
            [F] cucumber = 0.8.0
                [F] gherkin = 1.0.30
                    [F] trollop = 1.16.2
                [F] term-ansicolor = 1.0.4
                [F] builder = 2.1.2
                [F] diff-lcs = 1.1.2
                [F] json_pure = 1.4.3
        [F] database_cleaner
        [F] webrat
            [F] nokogiri = 1.2.0
            [F] rack = 1.0
            [F] rack-test = 0.5.3
                [F] rack = 1.0

        script/generate cucumber
        rake db:migrate
        gem uninstall builder cucumber cucumber-rails diff-lcs gherkin json_pure nokogiri rack-test term-ansicolor trollop webrat
        rake cucumber

        /usr/bin/ruby1.8 -I "cucumber/vendor/gems/cucumber-0.8.0/lib:lib" "cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber" --profile default
        /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- gherkin (LoadError)
            from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from cucumber/vendor/gems/cucumber-0.8.0/bin/../lib/cucumber/cli/main.rb:5
            from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5:in `require'
            from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I "cucumbe...]

        (See full trace by running task with --trace)

    Read the article

  • The file is damaged and could not be repaired

    - by acadia
    Hello experts, I am trying to display a PDF file in my ASP.NET page, based on the binary data received from an ASP.NET web service. Below is the code. Though I am getting the data from the web service, for some reason, if I run the code below on page load, I get the above-mentioned error. Please help.

        Response.Buffer = True
        Response.ContentType = "application/pdf"
        Response.AddHeader("Content-Disposition", "Inline")

        Dim ws As New imageGenService.Service1
        Dim imagebyte As Byte() = Nothing
        imagebyte = ws.generateSamplePDF()

        If imagebyte IsNot Nothing Then
            '"attachment; filename=Whatever.pdf"
            Dim MemStream As New System.IO.MemoryStream
            Dim doc As New iTextSharp.text.Document
            Dim reader As iTextSharp.text.pdf.PdfReader
            Dim numberOfPages As Integer
            Dim currentPageNumber As Integer
            Dim writer As iTextSharp.text.pdf.PdfWriter = iTextSharp.text.pdf.PdfWriter.GetInstance(doc, MemStream)

            doc.Open()
            Dim cb As iTextSharp.text.pdf.PdfContentByte = writer.DirectContent
            Dim page As iTextSharp.text.pdf.PdfImportedPage
            Dim rotation As Integer

            reader = New iTextSharp.text.pdf.PdfReader(imagebyte)
            numberOfPages = reader.NumberOfPages
            currentPageNumber = 0

            Do While (currentPageNumber < numberOfPages)
                currentPageNumber += 1
                doc.SetPageSize(PageSize.LETTER)
                doc.NewPage()
                page = writer.GetImportedPage(reader, currentPageNumber)
                rotation = reader.GetPageRotation(currentPageNumber)
                If (rotation = 90) Or (rotation = 270) Then
                    cb.AddTemplate(page, 0, -1.0F, 1.0F, 0, 0, reader.GetPageSizeWithRotation(currentPageNumber).Height)
                Else
                    cb.AddTemplate(page, 1.0F, 0, 0, 1.0F, 0, 0)
                End If
            Loop

            If MemStream Is Nothing Then
                Response.Write("No Data is available for output")
            Else
                Response.BinaryWrite(MemStream.GetBuffer())
            End If
        End If
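    One detail that commonly produces exactly this "damaged file" symptom, sketched in C# (hedged; adapt to the VB above): the iTextSharp document is never closed, yet Close() is what writes the PDF trailer. GetBuffer() can also return allocation slack beyond the written bytes; ToArray() avoids that.

        // Hedged sketch: finish the document, then send only the written bytes.
        doc.Close();                            // writes the PDF trailer; without it the output is corrupt
        byte[] pdfBytes = MemStream.ToArray();  // ToArray(), not GetBuffer(), to drop unused buffer slack
        Response.ContentType = "application/pdf";
        Response.BinaryWrite(pdfBytes);
        Response.End();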

    Read the article

  • Problem with TextBox inside UpdatePanel - OnTextChanged event not firing

    - by DaDa
    I have the following situation: I have a TextBox inside an AJAX UpdatePanel. Whenever the user types in the TextBox, I must display a message (a different message that depends on the data the user typed).

        <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Always">
            <ContentTemplate>
                <asp:TextBox ID="txtMyTexbox" runat="server" Width="500px"
                    OnTextChanged="txtMyTexbox_TextChanged" AutoPostBack="true"></asp:TextBox>
                <br />
                <asp:Label ID="lblMessage" runat="server" CssClass="errorMessage" Visible="false">Hello World</asp:Label>
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="txtMyTexbox" />
            </Triggers>
        </asp:UpdatePanel>

    On the server side I have written the following at page load:

        ScriptManager.GetCurrent(this).RegisterAsyncPostBackControl(txtMyTexbox);

    and the method like this:

        protected void txtMyTexbox_TextChanged(object sender, EventArgs e)
        {
            if (.....)
            {
                lblMessage.Visible = false;
            }
            else
            {
                lblMessage.Visible = true;
            }
        }

    My problem now is that when the user types in the TextBox, it doesn't cause the OnTextChanged event. Am I missing something?

    Read the article

  • TransactionScope won't work with DB2 provider

    - by Florin
    Hi everyone, I've been trying to use TransactionScope with a DB2 database (using the DB2 .NET provider v9.0.0.2 and C# 2.0), which SHOULD be supported according to IBM. I have tried all the advice I could find on the IBM forums (such as here) to no avail. I have enabled XA transactions on my XP SP2 machine, and also tried from a Win 2003 Server machine, but I consistently get the infamous error:

        ERROR [58005] [IBM][DB2/NT] SQL0998N Error occurred during transaction or heuristic processing.
        Reason Code = "16". Subcode = "2-80004005". SQLSTATE=58005

    The Windows event log says:

        The XA Transaction Manager attempted to load the XA resource manager DLL.
        The call to LOADLIBRARY for the XA resource manager DLL failed:
        DLL=C:\APPS\IBM\DB2v95fp2\SQLLIB\BIN\DB2APP.DLL
        File=d:\comxp_sp2\com\com1x\dtc\dtc\xatm\src\xarmconn.cpp Line=2467.

    Also, I granted the NETWORK SERVICE user full rights to the folder and DLL. Here's the MSDTC startup message:

        MS DTC started with the following settings:
        Security Configuration (OFF = 0 and ON = 1):
        Network Administration of Transactions = 0,
        Network Clients = 0,
        Inbound Distributed Transactions using Native MSDTC Protocol = 0,
        Outbound Distributed Transactions using Native MSDTC Protocol = 0,
        Transaction Internet Protocol (TIP) = 0,
        XA Transactions = 1

    Any help would be much appreciated! Thanks, Florin
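    For reference, a minimal C# sketch of the pattern being attempted (DB2Connection comes from IBM's provider; the SQL is illustrative). One thing worth verifying while the XA path is broken: whether a single connection inside the scope behaves differently, since enlisting additional connections forces escalation to the MSDTC/XA machinery that is failing here (whether a lone DB2 connection escalates at all depends on the provider):

        using (var scope = new TransactionScope())
        using (var conn = new DB2Connection(connectionString))
        {
            conn.Open(); // enlists in the ambient transaction
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "UPDATE ACCOUNTS SET BALANCE = BALANCE - 100 WHERE ID = 1";
                cmd.ExecuteNonQuery();
            }
            scope.Complete(); // commit; without this the transaction rolls back
        }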

    Read the article

  • Android WebView - cannot understand - Null or empty value for header "if-none-match"

    - by ganesh
    Hi, when I try to load a URL, I get an exception as below:

        Uncaught handler: thread WebViewCoreThread exiting due to uncaught exception
        06-16 10:22:31.471: ERROR/AndroidRuntime(635): java.lang.RuntimeException: Null or empty value for header "if-none-match"
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.Request.addHeader(Request.java:161)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.Request.addHeaders(Request.java:179)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.Request.<init>(Request.java:132)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.RequestQueue.queueRequest(RequestQueue.java:480)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.RequestHandle.createAndQueueNewRequest(RequestHandle.java:419)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.net.http.RequestHandle.setupRedirect(RequestHandle.java:195)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.webkit.LoadListener.doRedirect(LoadListener.java:1216)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.webkit.LoadListener.handleMessage(LoadListener.java:220)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.os.Handler.dispatchMessage(Handler.java:99)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.os.Looper.loop(Looper.java:123)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at android.webkit.WebViewCore$WebCoreThread.run(WebViewCore.java:471)
        06-16 10:22:31.471: ERROR/AndroidRuntime(635):     at java.lang.Thread.run(Thread.java:1060)

    The code I am using is:

        webview = (WebView)findViewById(R.id.generalwebview);
        webview.getSettings().setJavaScriptEnabled(true);
        webview.setWebViewClient(new WebViewClient() {
            public boolean shouldOverrideUrlLoading(WebView view, String url) {
                Log.i("ReserveBooking", "Processing webview url click...");
                view.loadUrl(url);
                return true;
            }

            public void onPageFinished(WebView view, String url) {
                Log.i("ReserveBooking", "Finished loading URL: " + url);
            }

            public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) {
                Log.e("ReserveBooking", "Error: " + description);
                Toast.makeText(ReserveBooking.this, description, Toast.LENGTH_SHORT).show();
            }
        });
        webview.loadUrl(utls);

    When I changed the emulator, this program worked without any error. Please help me understand the reason why I get this error. Is this error somehow related to the cache? I shall be glad if someone explains. ganesh

    Read the article

  • Intellij Javafx artifact - how do you make it?

    - by Louis Sayers
    I've been trying all day to turn my JavaFX application into a jar file. I'm using Java 1.7 update 7. Oracle has some information, but it just seems scattered all over the place. IntelliJ is almost doing the job, but I get the following error:

        java.lang.NoClassDefFoundError: javafx/application/Application

    Which seems to say that I need to tell Java where the jfxrt.jar is... If I add this jar to my classpath info for the manifest build in IntelliJ (Ctrl+Shift+Alt+S -> Artifacts -> Output Layout tab -> Class Path), then I get another error:

        Could not find or load main class com.downloadpinterest.code.Main

    It seems strange to have to include jfxrt.jar in my classpath though... I've tried to create an Ant script as well, but I feel like IntelliJ is 90% of the way there - and I just need a bit of help to figure out why I need to include jfxrt.jar, and why my Main class isn't being found (I'm guessing I need to add it to the classpath somehow?). Could anyone clue me in as to what is happening? I had a basic GUI before which worked fine, but JavaFX seems to be making life complicated!

    Read the article

  • Activity Indicator display in Table View whilst row data is being fetched

    - by Tofrizer
    Hi all, I am navigating from tableview1.row to a tableview2 which has a LOT of rows being fetched. Given the load time is around 3 seconds, I want the navigation to slide into tableview2 as soon as the tableview1.row is selected, and then display a UIActivityIndicatorView above tableview2 while the data is fetched and then rendered in its underlying table view. Note, tableview2 is actually a subview of the parent UIView (as opposed to the parent being a UITableView).

    I've seen this post: http://stackoverflow.com/questions/2153653/activity-indicator-shold-be-displayed-when-navigating-from-uitableview1-to-uitabl ... which gives instructions to add the activity indicator start and stopAnimating calls around the data fetch in viewDidLoad of tableview2.

    Thing is, I'm not sure how the above solution could work, as viewDidLoad runs and completes before tableview2 visibly slides into view.

    Separately, I also tried adding an activity indicator over tableview2 in IB and added the IBOutlet indicator's start/stop animating code in viewDidAppear. What happens is the data fetch runs and I can see the indicator spinning, but at the end of the fetch, the table view is empty. It seems viewDidAppear is too late to add data to the table view, as cellForRowAtIndexPath etc. have already fired.

    Can anyone please suggest any pointers? I could very well be missing something obvious here (it's nearly 5am where I am and I think my brain is mush). Should I re-trigger cellForRowAtIndexPath etc. from viewDidAppear? Is the issue that my table view is a subview and not the parent view? Thanks

    Read the article

  • PHP template class with variables?

    - by Josh
    I want to make developing my new projects easier, and I wanted a bare-bones, very simple template engine solution. I looked around on the net and everything is either too bloated or makes me cringe. My HTML files will be like so:

        <html>
            <head>
                <title>{PAGE_TITLE}</title>
            </head>
            <body>
                <h1>{PAGE_HEADER}</h1>
                <p>Some random content that is likely not to be parsed with PHP.</p>
            </body>
        </html>

    Obviously, I want to replace {PAGE_TITLE} and {PAGE_HEADER} with something I set with PHP. Like this:

        <?php
            $pageElements = array(
                '{PAGE_TITLE}' => 'Some random title.',
                '{PAGE_HEADER}' => 'A page header!'
            );
        ?>

    And I'd use something like str_replace, load the replaced HTML into a string, then print it to the page? This is the path I'm on at the moment... does anyone have any advice or a way I can do this better? Thanks.

    Read the article

  • How to insert inline content from one FlowDocument into another?

    - by Robert Rossney
    I'm building an application that needs to allow a user to insert text from one RichTextBox at the current caret position in another one. I spent a lot of time screwing around with the FlowDocument's object model before running across this technique (source and target are both FlowDocuments):

        using (MemoryStream ms = new MemoryStream())
        {
            TextRange tr = new TextRange(source.ContentStart, source.ContentEnd);
            tr.Save(ms, DataFormats.Xaml);
            ms.Seek(0, SeekOrigin.Begin);
            tr = new TextRange(target.CaretPosition, target.CaretPosition);
            tr.Load(ms, DataFormats.Xaml);
        }

    This works remarkably well. The only problem I'm having with it now is that it always inserts the source as a new paragraph. It breaks the current run (or whatever) at the caret, inserts the source, and ends the paragraph. That's appropriate if the source actually is a paragraph (or more than one paragraph), but not if it's just (say) a line of text.

    I think it's likely that the answer to this is going to end up being: check the target to see if it consists entirely of a single block, and if it does, set the TextRange to point at the beginning and end of the block's content before saving it to the stream.

    The entire world of the FlowDocument is a roiling sea of dark mysteries to me. I can become an expert at it if I have to (per Dostoevsky: "Man is the animal who can get used to anything."), but if someone has already figured this out and can tell me how to do this, it would make my life far easier.
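    A hedged sketch of that single-block check (assuming the intent is: when the source holds exactly one Paragraph, serialize only its inline content so Load splices it inline at the caret):

        // Sketch: serialize only the paragraph's inline content when the source
        // is a single block; otherwise fall back to the whole-document range.
        TextRange sourceRange;
        if (source.Blocks.Count == 1 && source.Blocks.FirstBlock is Paragraph)
        {
            Paragraph p = (Paragraph)source.Blocks.FirstBlock;
            sourceRange = new TextRange(p.ContentStart, p.ContentEnd);
        }
        else
        {
            sourceRange = new TextRange(source.ContentStart, source.ContentEnd);
        }
        // ...then Save/Load via the MemoryStream exactly as before.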

    Read the article

  • WCF service errors after installing WindowsXP updates

    - by niao
    Greetings. Today, before starting work on my application, I updated my Windows XP. After all the updates were installed, my WCF service stopped working. The following error appears when I try to open the service.svc file in the browser:

        Configuration Error

        Description: An error occurred during the processing of a configuration file required to
        service this request. Please review the specific error details below and modify your
        configuration file appropriately.

        Parser Error Message: An error occurred creating the configuration section handler for
        system.serviceModel/bindings: Could not load type
        'System.Security.Authentication.ExtendedProtection.Configuration.ExtendedProtectionPolicyElement'
        from assembly 'System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

        Source Error:

        Line 131: </behaviors>
        Line 132:
        Line 133: <bindings>
        Line 134:   <wsHttpBinding>
        Line 135:     <binding name="MyWSHttpBinding" maxReceivedMessageSize="2147483647">

    A colleague of mine ran the same service before the update and it worked fine, and he has the same problem after installing the updates. Can someone please help me?

    Read the article
