Search Results

Search found 1081 results on 44 pages for 'combinations'.

Page 38 of 44

  • Atheros Wireless card shows up as two different models?

    - by geermc4
    Hi I've been fighting these wireless drivers for a few days and just recently i noticed that the model the Wireless controller appears in lspci is different sometimes. This is the data i have after installing Ubuntu Server 64 bit ~# lspci -k .... 04:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: AzureWave Device 1d89 Kernel driver in use: ath9k Kernel modules: ath9k ran some updates, restarted, all was good, all though it did say that linux-headers-server linux-image-server linux-server where beeing kept back. After that i installed ubuntu-desktop (aptitude install ubuntu-desktop --without-recommends) restarted and not only is the wireless not working anymore, but the hardware is listed as a different card ~# lspci -k .... 04:00.0 Ethernet controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) has no available drivers for it, still i tried to modprobe ath9k, they show up in lsmod as loaded, but still iw list shows nothing. this is what it looked like before the ubuntu-desktop instalation Wiphy phy0 Band 1: Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream Max AMSDU length: 3839 bytes DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 8 usec (0x06) HT TX/RX MCS rate indexes supported: 0-7 Frequencies: * 2412 MHz [1] (14.0 dBm) * 2417 MHz [2] (15.0 dBm) * 2422 MHz [3] (15.0 dBm) * 2427 MHz [4] (15.0 dBm) * 2432 MHz [5] (15.0 dBm) * 2437 MHz [6] (15.0 dBm) * 2442 MHz [7] (15.0 dBm) * 2447 MHz [8] (15.0 dBm) * 2452 MHz [9] (15.0 dBm) * 2457 MHz [10] (15.0 dBm) * 2462 MHz [11] (15.0 dBm) * 2467 MHz [12] (15.0 dBm) (passive scanning) * 2472 MHz [13] (14.0 dBm) (passive scanning) * 2484 MHz [14] (17.0 dBm) (passive scanning) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) * CMAC (00-0f-ac:6) Available Antennas: TX 0x1 RX 0x3 Configured Antennas: TX 0x1 RX 0x3 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point * P2P-client * P2P-GO software interface modes (can always be added): * AP/VLAN * monitor interface combinations are not supported Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * remain_on_channel * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * connect * disconnect Supported TX frame types: * IBSS: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * managed: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP/VLAN: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * mesh point: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 
0x00f0 * P2P-client: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-GO: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 Supported RX frame types: * IBSS: 0x00d0 * managed: 0x0040 0x00d0 * AP: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * AP/VLAN: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * mesh point: 0x00b0 0x00c0 0x00d0 * P2P-client: 0x0040 0x00d0 * P2P-GO: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 Device supports RSN-IBSS. What's with the hardware change? If it has 2, how can i make the AR9285 always load and disable AR5008, or, is it the same and it's just showing it different? :| Oh and I've tried this on Ubuntu 10.04 server, xubuntu 12.04, ubuntu 12.04 desktop and server. Thanks in advanced. -- Here's some more info, i have it setup in 2 hard drives, 1 works and the other one i'm using to figure it out The one that works... # lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 06 serial: 54:04:a6:a3:3b:96 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.147 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff *-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:04:00.0 logical name: wlan0 version: 01 serial: 74:2f:68:4a:26:73 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-18-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:fea00000-fea0ffff Here's where it doesn't # lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 06 serial: 54:04:a6:a3:3b:96 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.160 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff *-network UNCLAIMED description: Ethernet controller product: AR5008 Wireless Network Adapter vendor: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:04:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:fea00000-fea0ffff Update I've noticed that if i blacklist the ath9k and ath9k_common modules lspci gives me the AR9285, but then I need to modprobe ath9k for it to work, does this make any sense? If so, why?
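
    The blacklist-then-modprobe workaround described in the update is normally wired up on Ubuntu along these lines; a hedged sketch only (the file name below is made up, and this pins the module loading order rather than explaining the AR9285/AR5008 identity swap):

        # /etc/modprobe.d/ath9k-local.conf  (hypothetical file name)
        # Stop the wireless modules from auto-loading at boot.
        blacklist ath9k
        blacklist ath9k_common

        # Afterwards, load the driver by hand (or from /etc/rc.local):
        #   sudo modprobe ath9k
        # and refresh the module metadata after editing the blacklist:
        #   sudo depmod -a
        #   sudo update-initramfs -u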

    Read the article

  • Subterranean IL: The ThreadLocal type

    - by Simon Cooper
    I came across ThreadLocal<T> while I was researching ConcurrentBag. To look at it, it doesn't really make much sense. What's all those extra Cn classes doing in there? Why is there a GenericHolder<T,U,V,W> class? What's going on? However, digging deeper, it's a rather ingenious solution to a tricky problem. Thread statics Declaring that a variable is thread static, that is, values assigned and read from the field is specific to the thread doing the reading, is quite easy in .NET: [ThreadStatic] private static string s_ThreadStaticField; ThreadStaticAttribute is not a pseudo-custom attribute; it is compiled as a normal attribute, but the CLR has in-built magic, activated by that attribute, to redirect accesses to the field based on the executing thread's identity. TheadStaticAttribute provides a simple solution when you want to use a single field as thread-static. What if you want to create an arbitary number of thread static variables at runtime? Thread-static fields can only be declared, and are fixed, at compile time. Prior to .NET 4, you only had one solution - thread local data slots. This is a lesser-known function of Thread that has existed since .NET 1.1: LocalDataStoreSlot threadSlot = Thread.AllocateNamedDataSlot("slot1"); string value = "foo"; Thread.SetData(threadSlot, value); string gettedValue = (string)Thread.GetData(threadSlot); Each instance of LocalStoreDataSlot mediates access to a single slot, and each slot acts like a separate thread-static field. As you can see, using thread data slots is quite cumbersome. You need to keep track of LocalDataStoreSlot objects, it's not obvious how instances of LocalDataStoreSlot correspond to individual thread-static variables, and it's not type safe. It's also relatively slow and complicated; the internal implementation consists of a whole series of classes hanging off a single thread-static field in Thread itself, using various arrays, lists, and locks for synchronization. ThreadLocal<T> is far simpler and easier to use. ThreadLocal ThreadLocal provides an abstraction around thread-static fields that allows it to be used just like any other class; it can be used as a replacement for a thread-static field, it can be used in a List<ThreadLocal<T>>, you can create as many as you need at runtime. So what does it do? It can't just have an instance-specific thread-static field, because thread-static fields have to be declared as static, and so shared between all instances of the declaring type. There's something else going on here. The values stored in instances of ThreadLocal<T> are stored in instantiations of the GenericHolder<T,U,V,W> class, which contains a single ThreadStatic field (s_value) to store the actual value. This class is then instantiated with various combinations of the Cn types for generic arguments. In .NET, each separate instantiation of a generic type has its own static state. For example, GenericHolder<int,C0,C1,C2> has a completely separate s_value field to GenericHolder<int,C1,C14,C1>. This feature is (ab)used by ThreadLocal to emulate instance thread-static fields. Every time an instance of ThreadLocal is constructed, it is assigned a unique number from the static s_currentTypeId field using Interlocked.Increment, in the FindNextTypeIndex method. The hexadecimal representation of that number then defines the specific Cn types that instantiates the GenericHolder class. That instantiation is therefore 'owned' by that instance of ThreadLocal. 
This gives each instance of ThreadLocal its own ThreadStatic field through a specific unique instantiation of the GenericHolder class. Although GenericHolder has four type variables, the first one is always instantiated to the type stored in the ThreadLocal<T>. This gives three free type variables, each of which can be instantiated to one of 16 types (C0 to C15). This puts an upper limit of 4096 (163) on the number of ThreadLocal<T> instances that can be created for each value of T. That is, there can be a maximum of 4096 instances of ThreadLocal<string>, and separately a maximum of 4096 instances of ThreadLocal<object>, etc. However, there is an upper limit of 16384 enforced on the total number of ThreadLocal instances in the AppDomain. This is to stop too much memory being used by thousands of instantiations of GenericHolder<T,U,V,W>, as once a type is loaded into an AppDomain it cannot be unloaded, and will continue to sit there taking up memory until the AppDomain is unloaded. The total number of ThreadLocal instances created is tracked by the ThreadLocalGlobalCounter class. So what happens when either limit is reached? Firstly, to try and stop this limit being reached, it recycles GenericHolder type indexes of ThreadLocal instances that get disposed using the s_availableIndices concurrent stack. This allows GenericHolder instantiations of disposed ThreadLocal instances to be re-used. But if there aren't any available instantiations, then ThreadLocal falls back on a standard thread local slot using TLSHolder. This makes it very important to dispose of your ThreadLocal instances if you'll be using lots of them, so the type instantiations can be recycled. The previous way of creating arbitary thread-static variables, thread data slots, was slow, clunky, and hard to use. In comparison, ThreadLocal can be used just like any other type, and each instance appears from the outside to be a non-static thread-static variable. It does this by using the CLR type system to assign each instance of ThreadLocal its own instantiated type containing a thread-static field, and so delegating a lot of the bookkeeping that thread data slots had to do to the CLR type system itself! That's a very clever use of the CLR type system.
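
    The outward behaviour being described — each ThreadLocal<T> instance acting like its own non-static thread-static variable — is the same contract java.lang.ThreadLocal exposes on the JVM; a rough cross-language sketch of that contract (Java rather than C#, and nothing here touches the GenericHolder trick itself):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ThreadLocalDemo {
            // Two independent "instance" thread-locals; every thread gets its own slot in each.
            static final ThreadLocal<String> name = ThreadLocal.withInitial(() -> "unset");
            static final ThreadLocal<Integer> count = ThreadLocal.withInitial(() -> 0);

            public static void main(String[] args) throws InterruptedException {
                Runnable task = () -> {
                    name.set(Thread.currentThread().getName());
                    count.set(count.get() + 1);
                    // each pool thread increments only its own counter
                    System.out.println(name.get() + " -> " + count.get());
                };
                ExecutorService pool = Executors.newFixedThreadPool(2);
                for (int i = 0; i < 4; i++) pool.submit(task);
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.SECONDS);
            }
        }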

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design around HTTP modules in ASP.NET. Authentication related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline – AuthenticateRequest/PostAuthenticateRequest for pre-processing and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules’ job to determine the identity of the client based on incoming HTTP details (like a header, cookie, form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) “sees” the identity that the last module has set. So in our case there are three modules in effect: FormsAuthenticationModule (AuthenticateRequest, EndRequest) WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest) SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest) So let’s have a look at the different scenario we have when mixing Forms auth and WS-Federation. Anoymous request to unprotected resource This is the easiest case. Since there is no WIF session cookie or a FormsAuth cookie, these modules do nothing. The WSFed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set. Anonymous request to FormsAuth protected resource This is the scenario where an anonymous user tries to access a FormsAuth protected resource for the first time. The principal is anonymous and before the page gets rendered, the Authorize attribute kicks in. The attribute determines that the user needs authentication and therefor sets a 401 status code and ends the request. Now execution jumps to the EndRequest event, where the FormsAuth module takes over. The module then converts the 401 to a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.   FormsAuth authenticated request to a FormsAuth protected resource Now a FormsAuth cookie is present, which gets validated by the FormsAuth module. This cookie gets turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns the principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context. Anonymous request to STS protected resource This time the anonymous user tries to access an STS protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS will be made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax). Postback from STS authentication After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie, and b) set on the context. STS authenticated request to an STS protected resource This time the WIF Session authentication module kicks in because it can find the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.     FormsAuth and STS authenticated request to a protected resource This is kind of an odd case – e.g. 
the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and then afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.   What about roles? A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No it doesn’t! When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager), and turns the roles in role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check for the authentication type of the incoming principal and act accordingly. HTH
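
    As a rough cross-stack analogy for the pipeline behaviour above (each authentication module sets the principal during pre-processing, and the handler sees whatever the last one set), the same pattern in a Java servlet filter chain might look like this — the class names and request attribute are invented for illustration, and this is not WIF/ASP.NET code:

        import java.io.IOException;
        import javax.servlet.*;

        // Registered first: establishes an identity when its cookie/marker is present.
        class FormsStyleAuthFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                if (req.getParameter("formsTicket") != null) {      // stand-in for cookie validation
                    req.setAttribute("principal", "forms-user");
                }
                chain.doFilter(req, res);
            }
            public void init(FilterConfig cfg) {}
            public void destroy() {}
        }

        // Registered after it: a federation session, if present, overwrites the principal,
        // which mirrors the "STS identity wins" case described above.
        class SessionStyleAuthFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                if (req.getParameter("sessionTicket") != null) {
                    req.setAttribute("principal", "sts-user");
                }
                chain.doFilter(req, res);   // the target servlet sees the last value set
            }
            public void init(FilterConfig cfg) {}
            public void destroy() {}
        }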

    Read the article

  • Routing audio to Bluetooth Headset (non-A2DP) on Android

    - by Jayesh
    I have a non-A2DP single ear BT headset (Plantronics 510) and would like to use it with my Android HTC Magic to listen to low quality audio like podcasts/audio books. After much googling I found that only phone call audio can be routed to the non-A2DP BT headsets. (I would like to know if you have found a ready solution to route all kinds of audio to non-A2DP BT headsets) So I figured, somehow programmatically I can channel the audio to the stream that carries phone call audio. This way I will fool the phone to carry my mp3 audio to my BT headset. I wrote following simple code. import android.content.*; import android.app.Activity; import android.os.Bundle; import android.media.*; import java.io.*; import android.util.Log; public class BTAudioActivity extends Activity { private static final String TAG = "BTAudioActivity"; private MediaPlayer mPlayer = null; private AudioManager amanager = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE); amanager.setBluetoothScoOn(true); amanager.setMode(AudioManager.MODE_IN_CALL); mPlayer = new MediaPlayer(); try { mPlayer.setDataSource(new FileInputStream( "/sdcard/sample.mp3").getFD()); mPlayer.setAudioStreamType(AudioManager.STREAM_VOICE_CALL); mPlayer.prepare(); mPlayer.start(); } catch(Exception e) { Log.e(TAG, e.toString()); } } @Override public void onDestroy() { mPlayer.stop(); amanager.setMode(AudioManager.MODE_NORMAL); amanager.setBluetoothScoOn(false); super.onDestroy(); } } As you can see I tried combinations of various methods that I thought will fool the phone to believe my audio is a phone call: Using MediaPlayer's setAudioStreamType(STREAM_VOICE_CALL) using AudioManager's setBluetoothScoOn(true) using AudioManager's setMode(MODE_IN_CALL) But none of the above worked. If I remove the AudioManager calls in the above code, the audio plays from speaker and if I replace them as shown above then the audio stops coming from speakers, but it doesn't come through the BT headset. So this might be a partial success. I have checked that the BT headset works alright with phone calls. There must be a reason for Android not supporting this. But I can't let go of the feeling that it is not possible to programmatically reroute the audio. Any ideas? P.S. above code needs following permission <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
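
    For completeness, the route that is usually suggested for pushing media audio down the SCO (headset) link adds startBluetoothSco() to the calls already tried above; a hedged sketch — it needs API level 8+, the same MODIFY_AUDIO_SETTINGS permission, and many handsets still refuse to route non-call audio this way:

        // Inside the Activity; exception handling left to the caller for brevity.
        private void playOverSco() throws java.io.IOException {
            AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
            am.setMode(AudioManager.MODE_IN_CALL);   // some devices want MODE_IN_COMMUNICATION instead
            am.startBluetoothSco();                  // brings the SCO link up asynchronously
            am.setBluetoothScoOn(true);

            MediaPlayer player = new MediaPlayer();
            player.setAudioStreamType(AudioManager.STREAM_VOICE_CALL);
            player.setDataSource("/sdcard/sample.mp3");   // same file as in the code above
            player.prepare();
            player.start();
            // when finished: player.stop(); am.setBluetoothScoOn(false);
            //                am.stopBluetoothSco(); am.setMode(AudioManager.MODE_NORMAL);
        }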

    Read the article

  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined document structure. I am expressing that structure through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate: Imagine, the example document structure is: <doc-root> <el name="foo"/> <el name="bar"/> </doc-root> Rather a trivial data structure. The schema goes something like this: <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="doc-root"> <xsd:complexType> <xsd:sequence> <xsd:element name="el" type="myCustomType" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:complexType name="myCustomType"> <xsd:attribute name="name" use="required" /> </xsd:complexType> </xsd:schema> Now, imagine, I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document or not any xml, starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema then, but just running it through a validating parser wouldn't work, since it will expect a 'doc-root' at the root. So, again, the question is, - how can one validate a document fragment against an existing schema, or, perhaps, how can a schema be written to allow such an approach. Hope it made sense.
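
    One pattern that works (sketched here in Java with javax.xml.validation; the schema file name is a placeholder) is to promote the fragment's root — el in the example — to a global element declaration, since a validator may start validation at any globally declared element rather than only at doc-root:

        import java.io.*;
        import javax.xml.XMLConstants;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.validation.*;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class FragmentValidation {
            public static void main(String[] args) throws Exception {
                // doc.xsd is assumed to declare <xsd:element name="el" type="myCustomType"/> at the top level.
                Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                             .newSchema(new File("doc.xsd"));
                Validator validator = schema.newValidator();

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document fragment = dbf.newDocumentBuilder()
                                       .parse(new InputSource(new StringReader("<el name=\"foo\"/>")));

                // Throws SAXException if the fragment does not match the (global) el declaration.
                validator.validate(new DOMSource(fragment.getDocumentElement()));
                System.out.println("fragment is valid");
            }
        }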

    Read the article

  • Progress gauge in status bar, using Cody Precord's ProgressStatusBar

    - by MCXXIII
    Hi. I am attempting to create a progress gauge in the status bar for my application, and I'm using the example in Cody Precord's wxPython 2.8 Application Development Cookbook. I've reproduced it below. For now I simply wish to show the gauge and have it pulse when the application is busy, so I assume I need to use the Start/StopBusy() methods. Problem is, none of it seems to work, and the book doesn't provide an example of how to use the class. In the __init__ of my frame I create my status bar like so: self.statbar = status.ProgressStatusBar( self ) self.SetStatusBar( self.statbar ) Then, in the function which does all the work, I have tried things like: self.GetStatusBar().SetRange( 100 ) self.GetStatusBar().SetProgress( 0 ) self.GetStatusBar().StartBusy() self.GetStatusBar().Run() # work done here self.GetStatusBar().StopBusy() And several combinations and permutations of those commands, but nothing happens, no gauge is ever shown. The work takes several seconds, so it's not because the gauge simply disappears again too quickly for me to notice. I can get the gauge to show up by removing the self.prog.Hide() line from Precord's __init__ but it still doesn't pulse and simply disappears never to return once work has finished the first time. Here's Precord's class: class ProgressStatusBar( wx.StatusBar ): '''Custom StatusBar with a built-in progress bar''' def __init__( self, parent, id_=wx.ID_ANY, style=wx.SB_FLAT, name='ProgressStatusBar' ): super( ProgressStatusBar, self ).__init__( parent, id_, style, name ) self._changed = False self.busy = False self.timer = wx.Timer( self ) self.prog = wx.Gauge( self, style=wx.GA_HORIZONTAL ) self.prog.Hide() self.SetFieldsCount( 2 ) self.SetStatusWidths( [-1, 155] ) self.Bind( wx.EVT_IDLE, lambda evt: self.__Reposition() ) self.Bind( wx.EVT_TIMER, self.OnTimer ) self.Bind( wx.EVT_SIZE, self.OnSize ) def __del__( self ): if self.timer.IsRunning(): self.timer.Stop() def __Reposition( self ): '''Repositions the gauge as necessary''' if self._changed: lfield = self.GetFieldsCount() - 1 rect = self.GetFieldRect( lfield ) prog_pos = (rect.x + 2, rect.y + 2) self.prog.SetPosition( prog_pos ) prog_size = (rect.width - 8, rect.height - 4) self.prog.SetSize( prog_size ) self._changed = False def OnSize( self, evt ): self._changed = True self.__Reposition() evt.Skip() def OnTimer( self, evt ): if not self.prog.IsShown(): self.timer.Stop() if self.busy: self.prog.Pulse() def Run( self, rate=100 ): if not self.timer.IsRunning(): self.timer.Start( rate ) def GetProgress( self ): return self.prog.GetValue() def SetProgress( self, val ): if not self.prog.IsShown(): self.ShowProgress( True ) if val == self.prog.GetRange(): self.prog.SetValue( 0 ) self.ShowProgress( False ) else: self.prog.SetValue( val ) def SetRange( self, val ): if val != self.prog.GetRange(): self.prog.SetRange( val ) def ShowProgress( self, show=True ): self.__Reposition() self.prog.Show( show ) def StartBusy( self, rate=100 ): self.busy = True self.__Reposition() self.ShowProgress( True ) if not self.timer.IsRunning(): self.timer.Start( rate ) def StopBusy( self ): self.timer.Stop() self.ShowProgress( False ) self.prog.SetValue( 0 ) self.busy = False def IsBusy( self ): return self.busy

    Read the article

  • How can I get an Android TableLayout to fill the screen?

    - by Timmmm
    Hi, I'm battling with Android's retarded layout system. I'm trying to get a table to fill the screen (simple right?) but it's ridiculously hard. I got it to work somehow in XML like this: <?xml version="1.0" encoding="utf-8"?> <TableLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_height="fill_parent" android:layout_width="fill_parent"> <TableRow android:layout_height="fill_parent" android:layout_width="fill_parent" android:layout_weight="1"> <Button android:text="A" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_weight="1"/> <Button android:text="B" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_weight="1"/> </TableRow> <TableRow android:layout_height="fill_parent" android:layout_width="fill_parent" android:layout_weight="1"> <Button android:text="C" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_weight="1"/> <Button android:text="D" android:layout_width="wrap_content" android:layout_height="fill_parent" android:layout_weight="1"/> </TableRow> However I can not get it to work in Java. I've tried a million combinations of the LayoutParams, but nothing ever works. This is the best result I have which only fills the width of the screen, not the height: table = new TableLayout(this); // Java. You suck. TableLayout.LayoutParams lp = new TableLayout.LayoutParams( ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.FILL_PARENT); table.setLayoutParams(lp); // This line has no effect! WHYYYY?! table.setStretchAllColumns(true); for (int r = 0; r < 2; ++r) { TableRow row = new TableRow(this); for (int c = 0; c < 2; ++c) { Button btn = new Button(this); btn.setText("A"); row.addView(btn); } table.addView(row); } Obviously the Android documentation is no help. Anyone have any ideas?
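
    A sketch of the programmatic equivalent that is usually needed here (untested against this exact layout): the table's own LayoutParams only mean something to its parent, so they are passed when the table is attached, while rows take TableLayout.LayoutParams and cells take TableRow.LayoutParams, each with zero size plus weight on the axis that should be shared:

        TableLayout table = new TableLayout(this);
        table.setStretchAllColumns(true);

        for (int r = 0; r < 2; ++r) {
            TableRow row = new TableRow(this);
            for (int c = 0; c < 2; ++c) {
                Button btn = new Button(this);
                btn.setText("A");
                // cells: zero width + weight shares the row's width between the buttons
                row.addView(btn, new TableRow.LayoutParams(
                        0, TableRow.LayoutParams.FILL_PARENT, 1f));
            }
            // rows: zero height + weight makes the rows split the table's height
            table.addView(row, new TableLayout.LayoutParams(
                    TableLayout.LayoutParams.FILL_PARENT, 0, 1f));
        }

        // the table's own params belong to its parent, so supply them when attaching it
        setContentView(table, new ViewGroup.LayoutParams(
                ViewGroup.LayoutParams.FILL_PARENT, ViewGroup.LayoutParams.FILL_PARENT));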

    Read the article

  • SSLException: Keystore does not support enabled cipher suites

    - by wurfkeks
    I want to implement a small android application, that works as SSL Server. After lot of problems with the right format of the keystore, I solved this and run into the next one. My keystore file is properly loaded by the KeyStore class. But when I try to open the server socket (socket.accept()) the following error is raised: javax.net.ssl.SSLException: Could not find any key store entries to support the enabled cipher suites. I generated my keystore with this command: keytool -genkey -keystore test.keystore -keyalg RSA -keypass ssltest -storepass ssltest -storetype BKS -provider org.bouncycastle.jce.provider.BouncyCastleProvider -providerpath bcprov.jar with the Unlimited Strength Jurisdiction Policy for Java SE6 applied to my jre6. I got a list of supported ciphers suites by calling socket.getSupportedCipherSuites() that prints a long list with very different combinations. But I don't know how to get a supported key. I also tried the android debug keystore after converting it to BKS format using portecle but get still the same error. Can anyone help and tell how I can generate a key that is compatible with one of the cipher suites? Version Information: targetSDK: 15 tested on emulator running 4.0.3 and real device running 2.3.3 BounceCastle 1.46 portecle 1.7 Code of my test application: public class SSLTestActivity extends Activity implements Runnable { SSLServerSocket mServerSocket; ToggleButton tglBtn; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); this.tglBtn = (ToggleButton)findViewById(R.id.toggleButton1); tglBtn.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() { @Override public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { if (isChecked) { new Thread(SSLTestActivity.this).run(); } else { try { if (mServerSocket != null) mServerSocket.close(); } catch (IOException e) { Log.e("SSLTestActivity", e.toString()); } } } }); } @Override public void run() { try { KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType()); keyStore.load(getAssets().open("test.keystore"), "ssltest".toCharArray()); ServerSocketFactory socketFactory = SSLServerSocketFactory.getDefault(); mServerSocket = (SSLServerSocket) socketFactory.createServerSocket(8080); while (!mServerSocket.isClosed()) { Socket client = mServerSocket.accept(); PrintWriter output = new PrintWriter(client.getOutputStream(), true); output.println("So long, and thanks for all the fish!"); client.close(); } } catch (Exception e) { Log.e("SSLTestActivity", e.toString()); } } }
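
    One detail worth noting in the code above: SSLServerSocketFactory.getDefault() never sees the keystore that was just loaded, so no private-key entry backs the enabled cipher suites. A sketch of wiring the BKS store through a KeyManagerFactory instead (same asset name and "ssltest" password as above, and assuming the store holds the generated key pair):

        // Replaces the socket setup inside run()'s try block.
        // Needs javax.net.ssl.KeyManagerFactory and javax.net.ssl.SSLContext imports.
        KeyStore keyStore = KeyStore.getInstance("BKS");
        keyStore.load(getAssets().open("test.keystore"), "ssltest".toCharArray());

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "ssltest".toCharArray());

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);   // default trust managers, default RNG

        mServerSocket = (SSLServerSocket) sslContext.getServerSocketFactory().createServerSocket(8080);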

    Read the article

  • Vs2010 MvcBuildViews Not firing

    - by Maslow
    This project in Vs2008 targeting .net 3.5 used to compile views. Vs2010 Targeting .net 4.0 the following view code is not picked up as an error, and I have not found anyway to listen to the mvcBuildview trace/debug output: <%{ %> A completely unmatched code block declaration is not being picked up, neither was a partial view inheriting from a non existent namespace/class. <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugWithBuildViews|AnyCPU' "> <!--<BaseIntermediateOutputPath>bin/intermediate</BaseIntermediateOutputPath>--> <!--<MvcBuildViews Condition=" '$(Configuration)' == 'DebugWithBuildViews' ">true</MvcBuildViews>--> <EnableUpdateable>false</EnableUpdateable> <MvcBuildViews>true</MvcBuildViews> <DebugSymbols>true</DebugSymbols> <OutputPath>bin</OutputPath> <DefineConstants>DEBUG;TRACE</DefineConstants> <DebugType>full</DebugType> <PlatformTarget>AnyCPU</PlatformTarget> <CodeAnalysisUseTypeNameInSuppression>true</CodeAnalysisUseTypeNameInSuppression> <CodeAnalysisModuleSuppressionsFile>GlobalSuppressions.cs</CodeAnalysisModuleSuppressionsFile> <ErrorReport>prompt</ErrorReport> <CodeAnalysisRuleSet>AllRules.ruleset</CodeAnalysisRuleSet> <RunCodeAnalysis>true</RunCodeAnalysis> </PropertyGroup> My BeforeBuild: <Target Name="BeforeBuild"> <WriteLinesToFile File="$(OutputPath)\env.config" Lines="$(Configuration)" Overwrite="true"> </WriteLinesToFile> My AfterBuild: <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'"> <!--<BaseIntermediateOutputPath>[SomeKnownLocationIHaveAccessTo]</BaseIntermediateOutputPath>--> <Message Importance="high" Text="Precompiling views" /> <!--<AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)..\$(ProjectName)" />--> <!--<AspNetCompiler VirtualPath="temp" />--> <!--PhysicalPath="$(ProjectDir)\..\$(ProjectName)"--> I know the MvcBuildViews property is true because the Precompiling views message comes through. The compile is a success but it does not catch the view compilation errors. I have Vs2010 ultimate, vs 2008 developer+database edition on this machine. So either it compiles ignoring the errors with some combinations of the fixes I've tried, or it errors with Error 410 It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. web.config 100 The commented out sections are things I have tried Previously I have tried the fixes from these posts: Compile Views in Asp.net Mvc AllowDefinitionMachinetoApplicationError MvcBuildviews Issue Turning on MVC Build Views in 2010 TFS Johnny Coder
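
    For reference, the view-compilation target in the stock MVC templates for VS2010 is usually shaped like the sketch below (a separate target hooked in with AfterTargets, compiling the output folder rather than the project folder); the allowDefinition='MachineToApplication' error typically comes from a stale web.config copy left under obj\ by the packaging pipeline, so deleting obj or pointing AspNetCompiler away from it is the common workaround:

        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <Message Importance="high" Text="Precompiling views" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>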

    Read the article

  • Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

    - by Erez
    Hi, i'm trying to find the naswer on the web and can't find a full solution that i can use and that will work... I'm developing a MFC project with static picture as the canvas for an openGL class that draw the grphics for my game. On moush down, i need to retrive a shape coordinate from the openGL class. I'm looking for a way to convert the openGL coordinates to MFC coordinates but no matter what i try i get junk after using the gluProject or gluUnProject (i've tried to do both ways but non is working) GLdouble modelMatrix[16]; glGetDoublev(GL_MODELVIEW_MATRIX,modelMatrix); GLdouble projMatrix[16]; glGetDoublev(GL_PROJECTION_MATRIX,projMatrix); int viewport[4]; glGetIntegerv(GL_VIEWPORT,viewport); POINT mouse; // Stores The X And Y Coords For The Current Mouse Position GetCursorPos(&mouse); // Gets The Current Cursor Coordinates (Mouse Coordinates) ScreenToClient(hWnd, &mouse); GLdouble winX, winY, winZ; // Holds Our X, Y and Z Coordinates winX; = (float)point.x; // Holds The Mouse X Coordinate winY; = (float)point.y; // Holds The Mouse Y Coordinate winY = (float)viewport[3] - winY; glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); GLdouble posX=s1->getPosX(), posY=s1->getPosY(), posZ=s1->getPosZ(); // Hold The Final Values gluUnProject( winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ); gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ); This is just part of the code i've tried. ofcourse not gluProject and gluUnProject together. just had them both here to show.....and i know there is lots of junk over there, its from some of my tries... p.s. i've tried many many more combinations and examples from the web and nothing seem to work in my case.... Can any one show me what is the right way to do the transformation? 10x

    Read the article

  • Node.js + Express.js. How to RENDER less css?

    - by Paden
    Hello all, I am unable to render less css in my express workspace. Here is my current configuration (my css/less files go in 'public/stylo/'): app.configure(function() { app.set('views' , __dirname + '/views' ); app.set('partials' , __dirname + '/views/partials'); app.set('view engine', 'jade' ); app.use(express.bodyDecoder() ); app.use(express.methodOverride()); app.use(express.compiler({ src: __dirname + '/public/stylo', enable: ['less']})); app.use(app.router); app.use(express.staticProvider(__dirname + '/public')); }); Here is my main.jade file: !!! html(lang="en") head title Yea a title link(rel="stylesheet", type="text/css", href="/stylo/main.less") link(rel="stylesheet", href="http://fonts.googleapis.com/cssfamily=Droid+Sans|Droid+Sans+Mono|Ubuntu|Droid+Serif") script(src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js") script(src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js") body!= body here is my main.less css: @import "goodies.css"; body { .googleFont; background-color : #000000; padding : 20px; margin : 0px; > .header { border-bottom : 1px solid #BBB; background-color : #f0f0f0; margin : -25px -25px 30px -25px; /* important */ color : #333; padding : 15px; font-size : 18pt; } } AND here is my goodies.less css: .rounded_corners(@radius: 10px) { -moz-border-radius : @radius; -webkit-border-radius: @radius; border-radius : @radius; } .shadows(@rad1: 0px, @rad2: 1px, @rad3: 3px, @color: #999) { -webkit-box-shadow : @rad1 @rad2 @rad3 @color; -moz-box-shadow : @rad1 @rad2 @rad3 @color; box-shadow : @rad1 @rad2 @rad3 @color; } .gradient (@type: linear, @pos1: left top, @pos2: left bottom, @color1: #f5f5f5, @color2: #ececec) { background-image : -webkit-gradient(@type, @pos1, @pos2, from(@color1), to(@color2)); background-image : -moz-linear-gradient(@color1, @color2); } .googleFont { font-family : 'Droid Serif'; } Cool deal. Now: I have installed less via npm and I had heard from another post that @imports should reference the .css not the .less. In any case, I have tried the combinations of switching .less for .css in the jade and less files with no success. If you can help or have the solution I'd greatly appreciate it. Note: The jade portion works fine if I enter any ol' .css. Note2: The less compiles if I use lessc via command line.

    Read the article

  • Event taps: Varying results with CGEventPost, kCGSessionEventTap, kCGAnnotatedSessionEventTap, CGEve

    - by kevingessner
    I'm running into a thorny problem with posting an event from an event tap. I'm tapping for NSSystemDefined at kCGHIDEventTap, then replacing the event with a new one. The problem I'm running in to is that depending on how I post the event, it's being seen only by some applications. My test applications are Opera, Firefox, Quicksilver, and Xcode. Here are the different techniques I've tried within my event tap callback, with results. I'm expecting an action (the "correct response") from each app; "system beep" means the nothing-is-bound-to-that-key system sound. Create a new event, and return it from the callback. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep Create a new event, post to kCGSessionEventTap with CGEventPost, return null. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, return null. Opera: correct response, Firefox: correct response, Quicksilver: no response/system beep, Xcode: no response/system beep Create a new event, post with CGEventTapPostEvent, return null. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep Create a new event, post to kCGSessionEventTap with CGEventPost, and return new event. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep Create a new event, post to kCGAnnotatedSessionEventTap with CGEventPost, and return new event. Opera: correct response and system beep, Firefox: correct response and system beep, Quicksilver: correct response and system beep, Xcode: no response/double system beep Create a new event, post with CGEventTapPostEvent, and return new event. Opera: no response/system beep, Firefox: no response/system beep, Quicksilver: correct response, Xcode: no response/system beep (6) is the best, but users are complaining about the extra system beep on correct responses, which I'm guessing is coming from the double-posting of the event. I'm not sure of other combinations to try, or where else to look. Can anyone offer any guidance? Is there any way to get the results of both returning the event from my callback and posting to the annotated tap without doing both? Sorry for the lengthy question; I've been doing a lot of experimenting. Thanks in advance Update: this is the code I use to create the event tap: CFMachPortRef eventTap; eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, 0,CGEventMaskBit(NX_SYSDEFINED) | (1 << kCGEventKeyDown) | (1 << kCGEventKeyUp), myCGEventCallback, (void *)hidEventQueue);

    Read the article

  • Autoconf (newbie) -- building with static library

    - by EB
    I am trying to migrate from manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate. That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) Manual compilation is working with these: gcc ... -I/path/to/header/file/directory /full/path/to/the/.a/file/itself (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.) I can use the configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CPFLAGS and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the name of the needed files are myprog.h and myprog.a (which are both in the same directory), here is what works so far: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"]) Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library which produces earlier errors that indicate that the function prototypes have been loaded and used to compile. I tried adding the location that contains the .a file to LDFLAGS and then doing a AC_CHECK_LIB but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location"; AC_CHECK_LIB(myprog)]) No dice. AC_CHECK_LIB is looking for -lmyprog I guess (or libmyprog?) so I'm not sure if that's a problem, so I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location/myprog.a"]) To emulate the manual compilation, I tried removing the -L but that doesn't help: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS $location/myprog.a"]) I tried other combinations and permutations, but I think I might be missing something more fundamental....
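
    For what it's worth, the usual way to hand a bare .a file to the linker from configure is to append it to LIBS rather than LDFLAGS — -L only adds a search directory, and AC_CHECK_LIB(myprog) would look for libmyprog.a, which is not the file's name here. A sketch in the same shape as the snippets above:

        AC_CHECK_HEADER([$location/myprog.h],
                        [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h])
                         CFLAGS="$CFLAGS -I$location"
                         LIBS="$LIBS $location/myprog.a"],
                        [AC_MSG_ERROR([myprog.h not found in $location])])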

    Read the article

  • How to iterate over all the page breaks in an Excel 2003 worksheet via COM

    - by Martin
    I've been trying to retrieve the locations of all the page breaks on a given Excel 2003 worksheet over COM. Here's an example of the kind of thing I'm trying to do: Excel::HPageBreaksPtr pHPageBreaks = pSheet->GetHPageBreaks(); long count = pHPageBreaks->Count; for (long i=0; i < count; ++i) { Excel::HPageBreakPtr pHPageBreak = pHPageBreaks->GetItem(i+1); Excel::RangePtr pLocation = pHPageBreak->GetLocation(); printf("Page break at row %d\n", pLocation->Row); pLocation.Release(); pHPageBreak.Release(); } pHPageBreaks.Release(); I expect this to print out the row numbers of each of the horizontal page breaks in pSheet. The problem I'm having is that although count correctly indicates the number of page breaks in the worksheet, I can only ever seem to retrieve the first one. On the second run through the loop, calling pHPageBreaks->GetItem(i) throws an exception, with error number 0x8002000b, "invalid index". Attempting to use pHPageBreaks->Get_NewEnum() to get an enumerator to iterate over the collection also fails with the same error, immediately on the call to Get_NewEnum(). I've looked around for a solution, and the closest thing I've found so far is http://support.microsoft.com/kb/210663/en-us. I have tried activating various cells beyond the page breaks, including the cells just beyond the range to be printed, as well as the lower-right cell (IV65536), but it didn't help. If somebody can tell me how to get Excel to return the locations of all of the page breaks in a sheet, that would be awesome! Thank you. @Joel: Yes, I have tried displaying the user interface, and then setting ScreenUpdating to true - it produced the same results. Also, I have since tried combinations of setting pSheet->PrintArea to the entire worksheet and/or calling pSheet->ResetAllPageBreaks() before my call to get the HPageBreaks collection, which didn't help either. @Joel: I've used pSheet->UsedRange to determine the row to scroll past, and Excel does scroll past all the horizontal breaks, but I'm still having the same issue when I try to access the second one. Unfortunately, switching to Excel 2007 did not help either.

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background Users can pick dates as shown in the following screen shot: Any starting month/day and ending month/day combinations are valid, such as: Mar 22 to Jun 22 Dec 1 to Feb 28 The second combination is difficult (I call it the "tricky date scenario") because the year for the ending month/day is before the year for the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be: Dec 22, 1900 to Feb 28, 1901 Dec 22, 1901 to Feb 28, 1902 ... Dec 22, 2007 to Feb 28, 2008 Dec 22, 2008 to Feb 28, 2009 Problem Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year wrapping problem. Inputs The query receives as parameters: Year1, Year2: The full range of years, independent of month/day combination. Month1, Day1: The starting day within the year to gather data. Month2, Day2: The ending day within the year (or the next year) to gather data. Previous Attempt Consider the following MySQL code (that worked): end_year = start_year + greatest( -1 * sign( datediff( date( concat_ws('-', year, end_month, end_day ) ), date( concat_ws('-', year, start_month, start_day ) ) ) ), 0 ) How it works, with respect to the tricky date scenario: Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900. Count the difference, in days, between the two dates. If the result is negative, it means the year for the second date must be incremented by 1. In this case: Add 1 to the current year. Create a new end date: Feb 28, 1901. Check to see if the date range for the data falls between the start and calculated end date. If the result is positive, the dates have been provided in chronological order and nothing special needs to be done. This worked in MySQL because the difference in dates would be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of their relative chronological order. Question How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)? SELECT m.amount FROM measurement m WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1) AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2) Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.) Versions PostgreSQL 8.4.4 and PHP 5.2.10

    Read the article

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day, Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects we now wish to try adopting a suitable tool suite. Our manager is to some extent attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either: a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or a monolithic tool that tries to satisfy as much as possible of the process, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc. Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses with the serious enterprise type solutions being the best sort of fit. But all have some short comings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear I started wondering if we were trying to force adoption of a tool on a "best fit" basis. I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base which must use a well-known SCM, e.g. SVN, Hg, etc. Then if we take that as the common reference points for the tools employed then we would be able to use a group of tools to handle the different aspects of the Scrum process rather than try forcing a fit of a single tool would is a bit like forcing a square peg into the round hole. In this way, providing you've agreed your common reference points, you can use several tools, each performing their role better than a could be done by a single component in a monolithic tool suite. Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is their a better choice of points where the tools would meet? cheers,

    Read the article

  • Can LINQ-to-SQL omit unspecified columns on insert so a database default value is used?

    - by Todd Ropog
    I have a non-nullable database column which has a default value set. When inserting a row, sometimes a value is specified for the column, sometimes one is not. This works fine in TSQL when the column is omitted. For example, given the following table: CREATE TABLE [dbo].[Table1]( [id] [int] IDENTITY(1,1) NOT NULL, [col1] [nvarchar](50) NOT NULL, [col2] [nvarchar](50) NULL, CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED ([id] ASC) ) GO ALTER TABLE [dbo].[Table1] ADD CONSTRAINT [DF_Table1_col1] DEFAULT ('DB default') FOR [col1] The following two statements will work: INSERT INTO Table1 (col1, col2) VALUES ('test value', '') INSERT INTO Table1 (col2) VALUES ('') In the second statement, the default value is used for col1. The problem I have is when using LINQ-to-SQL (L2S) with a table like this. I want to produce the same behavior, but I can't figure out how to make L2S do that. I want to be able to run the following code and have the first row get the value I specify and the second row get the default value from the database: var context = new DataClasses1DataContext(); var row1 = new Table1 { col1 = "test value", col2 = "" }; context.Table1s.InsertOnSubmit(row1); context.SubmitChanges(); var row2 = new Table1 { col2 = "" }; context.Table1s.InsertOnSubmit(row2); context.SubmitChanges(); If the Auto Generated Value property of col1 is False, the first row is created as desired, but the second row fails with a null error on col1. If Auto Generated Value is True, both rows are created with the default value from the database. I've tried various combinations of Auto Generated Value, Auto-Sync and Nullable, but nothing I've tried gives the behavior I want. L2S does not omit the column from the insert statement when no value is specified. Instead it does something like this: INSERT INTO Table1 (col1, col2) VALUES (null, '') ...which of course causes a null error on col1. Is there some way to get L2S to omit a column from the insert statement if no value is given? Or is there some other way to get the behavior I want? I need the default value at the database level because not all row inserts are done via L2S, and in some cases the default value is a little more complex than a hard coded value (e.g. creating the default based on another field) so I'd rather avoid duplicating that logic.

    Read the article

  • How to create a compiler in vb.net

    - by Cyclone
    Before answering this question, understand that I am not asking how to create my own programming language, I am asking how, using vb.net code, I can create a compiler for a language like vb.net itself. Essentially, the user inputs code, they get a .exe. By NO MEANS do I want to write my own language, as it seems other compiler related questions on here have asked. I also do not want to use the vb.net compiler itself, nor do I wish to duplicate the IDE. The exact purpose of what I wish to do is rather hard to explain, but all I need is a nudge in the right direction for writing a compiler (from scratch if possible) which can simply take input and create a .exe. I have opened .exe files as plain text before (my own programs) to see if I could derive some meaning from what I assumed would be human readable text, yet I was obviously sorely disappointed to see the random ascii, though it is understandable why this is all I found. I know that a .exe file is simply lines of code, being parsed by the computer it is on, but my question here really boils down to this: What code makes up a .exe? How could I go about making one in a plain text editor if I wanted to? (No, I do not want to do that, but if I understand the process my goals will be much easier to achieve.) What makes an executable file an executable file? Where does the logic of the code fit in? This is intended to be a programming question as opposed to a computer question, which is why I did not post it on SuperUser. I know plenty of information about the System.IO namespace, so I know how to create a file and write to it, I simply do not know what exactly I would be placing inside this file to get it to work as an executable file. I am sorry if this question is "confusing", "dumb", or "obvious", but I have not been able to find any information regarding the actual contents of an executable file anywhere. One of my google searches Something that looked promising EDIT: The second link here, while it looked good, was an utter fail. I am not going to waste hours of my time hitting keys and recording the results. "Use the "Alt" and the 3-digit combinations to create symbols that do not appear on the keyboard but that you need in the program." (step 4) How the heck do I know what symbols I need??? Thank you very much for your help, and my apologies if this question is a nooby or "bad" one. To sum this up simply: I want to create a program in vb.net that can compile code in a particular language to a single executable file. What methods exist that can allow me to do this, and if there are none, how can I go about writing my own from scratch?

    Read the article

  • How to code a keyboard button to switch between 2 modes?

    - by le.shep20
    Hi! i'm doing a project, i'm not going to details but i will simplify my idea, i'm using Morse Code ( dot and dash) and i have 2 methods: convert_MorseToChar() and Convert_MorseTonum() in the convert_MorseToChar() method there is swich to compare the input from a user which will be Morse codes and mapping it to characters: private String convert_MorseToChar(ref string Ch) { switch (Ch) { Case ".-": MorsetoChar = "a" break; Case "-...": MorsetoChar = "b" break; Case "-.-.": MorsetoChar = "c" break; Case "-..": MorsetoChar = "d" break; Case ".": MorsetoChar = "e" break; } } and the other method Convert_MorseToNum(), ues the SAME combinations of Morse codes but mapping them to numbers: private String Convert_MorseToNum(ref string Ch) { switch (Ch) { Case ".-": MorsetoChar = "1" break; Case "-...": MorsetoChar = "2" break; Case "-.-.": MorsetoChar = "3" break; Case "-..": MorsetoChar = "4" break; Case ".": MorsetoChar = "5" break; } } now the senario is: there are 2 Textbox, one the user will write Morse codes in it and the other is for the output. The user will write dot "." and dash "-" from the keyboard and press Enter then the program will go to ONE of the 2 methods to convert the Morse codes. Now what tells the program where to go to convert?? my question is: I want to create mode key to swich between 2 modes: MorseTochar and MorseToNum. i want the down arrow key to act like a mode, when a user press the down arrow then it the program will be in MorseToChar mode, when ever the user input the program directly use the method convert_MorseToChar to convert to characters. and when the user press the down arrow agian, the prohram will swich to MorseToNum mode here when ever the user input as morsecode, the program will directly use the method Convert_MorseToNum() to convert to numbers. HOW I CAN DO THAT Pleaaaas!!! help me! Please excuse my English, English is not my native language :)

    Read the article

  • I just can't kill a Java thread.

    - by Adrian
    I have a thread that downloads some images from the internet using different proxies. Sometimes it hangs and can't be killed by any means.

        public HttpURLConnection uc;
        public InputStream in;

        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("server", 8080));
        URL url = new URL("http://images.com/image.jpg");
        uc = (HttpURLConnection) url.openConnection(proxy);
        uc.setConnectTimeout(30000);
        uc.setAllowUserInteraction(false);
        uc.setDoOutput(true);
        uc.addRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)");
        uc.connect();
        in = uc.getInputStream();

    When it hangs, it freezes at the uc.getInputStream() call. I made a timer which tries to kill the thread if its run time exceeds 3 minutes. I tried calling .terminate() on the thread: no effect. I tried uc.disconnect() from the main thread: that method also hangs, and the main thread with it. I tried in.close(): no effect. I tried uc = null and in = null, hoping for an exception that would end the thread: it keeps running. It never gets past the uc.getInputStream() call. In my last test the thread lasted over 14 hours after receiving all of the above commands (or various combinations of them). I had to kill the Java process to stop the thread. If I just ignore the thread and set its instance to null, the thread doesn't die and is not cleaned up by the garbage collector. I know that because if I let the application run for several days, the Java process takes more and more system memory; in 3 days it took 10% of my 8 GB of RAM. Is it really impossible to kill such a thread?

    Read the article

  • Data aggregation: MongoDB vs MySQL

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data aggregation requirements. The main project requirements are the following.

    Store millions of records per user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.

    Aggregation on those entries must be performed on the fly. Users need to be able to filter the entries by a ton of available filters and then be shown summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge.

    Users will have access to their own data only, but it would be nice if anonymous stats could be calculated across all the data.

    The data will arrive mostly in batches, e.g. the user uploads data every day and a batch could be around 3,000 records. In a later version there could be automated programs that upload every few minutes in smaller batches of, say, 100 items.

    I made a simple test: I created a table with 1 million rows and performed a simple sum over one column in both MongoDB and MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I also ran the test with CouchDB and got much worse results. What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible? Judging from my test (maybe I did something wrong), with the current performance it seems impossible to use MongoDB for such a project, although the automatic sharding functionality seems like a perfect fit. Does anybody have experience with data aggregation in MongoDB, or any insights that might help with the implementation of this project? Thanks, Dimitris
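
    For what it's worth, current MongoDB releases can compute sums and averages server-side through the aggregation framework (it did not exist in the earliest versions, when such comparisons were often made with map-reduce instead). A hedged sketch with the 2.x C# driver, where the database, collection and field names (analytics, entries, userId, category, amount) are invented for illustration:

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class AggregationSketch
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var entries = client.GetDatabase("analytics").GetCollection<BsonDocument>("entries");

                // Filter one user's entries, then group them and compute totals on the server.
                var pipeline = new[]
                {
                    new BsonDocument("$match", new BsonDocument("userId", 42)),
                    new BsonDocument("$group", new BsonDocument
                    {
                        { "_id", "$category" },
                        { "total",   new BsonDocument("$sum", "$amount") },
                        { "average", new BsonDocument("$avg", "$amount") }
                    })
                };

                foreach (var doc in entries.Aggregate<BsonDocument>(pipeline).ToList())
                    Console.WriteLine(doc);
            }
        }

    Whether this meets the on-the-fly latency target for 100 million entries still depends on indexes and on how selective the $match stage is, so it is a starting point rather than a benchmark.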

    Read the article

  • Algorithm to retrieve every possible combination of sublists of two lists

    - by sgmoore
    Suppose I have two lists. How do I iterate through every possible combination of every sublist, such that each item appears once and only once? An example: you have employees and jobs and you want to split them into teams, where each employee can only be in one team and each job can only be in one team. E.g.

        List<string> employees = new List<string>() { "Adam", "Bob" };
        List<string> jobs = new List<string>() { "1", "2", "3" };

    I want

        Adam : 1        Bob : 2, 3
        Adam : 1, 2     Bob : 3
        Adam : 1, 3     Bob : 2
        Adam : 2        Bob : 1, 3
        Adam : 2, 3     Bob : 1
        Adam : 3        Bob : 1, 2
        Adam, Bob : 1, 2, 3

    I tried using the answer to this Stack Overflow question to generate a list of every possible combination of employees and every possible combination of jobs and then select one item from each list, but that's about as far as I got. I don't know the maximum size of the lists, but it would certainly be less than 100, and there may be other limiting factors (such as each team having no more than 5 employees).

    Update: Not sure whether this can be tidied up or simplified further, but this is what I have ended up with so far. It uses the Group algorithm supplied by Yorye (see his answer below), but I removed the orderby, which I don't need and which caused problems when the keys are not comparable.

        var employees = new List<string>() { "Adam", "Bob" };
        var jobs = new List<string>() { "1", "2", "3" };
        int c = 0;
        foreach (int noOfTeams in Enumerable.Range(1, employees.Count))
        {
            var hs = new HashSet<string>();
            foreach (var grouping in Group(Enumerable.Range(1, noOfTeams).ToList(), employees))
            {
                // Generate a unique key for each group to detect duplicates.
                var key = string.Join(":", grouping.Select(sub => string.Join(",", sub)));
                if (!hs.Add(key)) continue;

                List<List<string>> teams = (from r in grouping select r.ToList()).ToList();
                foreach (var group in Group(teams, jobs))
                {
                    foreach (var sub in group)
                    {
                        Console.WriteLine(string.Join(", ", sub.Key) + " : " + string.Join(", ", sub));
                    }
                    Console.WriteLine();
                    c++;
                }
            }
        }
        Console.WriteLine(string.Format("{0:n0} combinations for {1} employees and {2} jobs",
            c, employees.Count, jobs.Count));

    Since I'm not worried about the order of the results, this seems to give me what I need.
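
    Separately from the Group helper above (which comes from the linked answer and is not reproduced here), a small self-contained sketch of the inner step (handing every job to exactly one of a fixed set of teams, so that each job appears once and only once) can be written by counting in base k. The outer step of grouping the employees into teams is deliberately left out, and the team-size cap is ignored.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class AssignmentSketch
        {
            // Enumerate every way to give each item to exactly one of k groups
            // (k^n assignments in total), by treating the counter as a base-k number.
            static IEnumerable<List<List<T>>> Distribute<T>(IList<T> items, int k)
            {
                long total = (long)Math.Pow(k, items.Count);
                for (long n = 0; n < total; n++)
                {
                    var groups = Enumerable.Range(0, k).Select(_ => new List<T>()).ToList();
                    long code = n;
                    foreach (T item in items)
                    {
                        groups[(int)(code % k)].Add(item);   // this digit picks the group
                        code /= k;
                    }
                    yield return groups;
                }
            }

            static void Main()
            {
                var employees = new List<string> { "Adam", "Bob" };
                var jobs = new List<string> { "1", "2", "3" };

                // One team per employee; skip assignments that leave a team without jobs.
                foreach (var groups in Distribute(jobs, employees.Count))
                {
                    if (groups.Any(g => g.Count == 0)) continue;
                    Console.WriteLine(string.Join("   ",
                        employees.Select((e, i) => e + " : " + string.Join(", ", groups[i]))));
                }
            }
        }

    This prints the six two-team lines from the example; the seventh line (Adam, Bob : 1, 2, 3) comes from the case where both employees form a single team, which is what the outer loop over noOfTeams handles.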

    Read the article

  • JDBC CommunicationsException with MySQL Database

    - by Dominik Siebel
    I'm having a little trouble with my MySQL connection pooling. The situation: different jobs are scheduled via Quartz. All jobs connect to different databases, which works fine the whole day, while the nightly scheduled jobs fail with a CommunicationsException. Quartz jobs:

        Job1 runs 0 0 6,10,14,18 * * ?
        Job2 runs 0 30 10,18 * * ?
        Job3 runs 0 0 5 * * ?

    As you can see, the last job runs at 18:00 and takes about 1 hour. The first job, at 5am, is the one that fails. I have already tried all kinds of parameter combinations in my resource config; this is the one I am running right now:

        <!-- Database 1 (MySQL) -->
        <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
                  maxActive="100" maxIdle="30" maxWait="10000"
                  removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
                  type="javax.sql.DataSource" name="jdbc/appDbProd"
                  username="****" password="****"
                  url="jdbc:mysql://127.0.0.1:3306/appDbProd?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
                  testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
                  validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

        <!-- Database 2 (MySQL) -->
        <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
                  maxActive="100" maxIdle="30" maxWait="10000"
                  removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
                  type="javax.sql.DataSource" name="jdbc/prodDbCopy"
                  username="****" password="****"
                  url="jdbc:mysql://127.0.0.1:3306/prodDbCopy?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
                  testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
                  validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

        <!-- Database 3 (MSSQL) -->
        <Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver"
                  maxActive="30" maxIdle="30" maxWait="100"
                  removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
                  name="jdbc/catalogDb" username="****" password="****"
                  type="javax.sql.DataSource"
                  url="jdbc:jtds:sqlserver://127.0.0.1:1433;databaseName=catalog;useNdTLMv2=false"
                  testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
                  validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

    For obvious reasons I changed the IPs, usernames and passwords, but they can be assumed to be correct, seeing that the application runs successfully throughout the day. The most annoying thing is that the 5am job queries Database2 successfully but then fails to query Database1 for some reason (CommunicationsException):

        Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully
        received from the server was 39,376,539 milliseconds ago. The last packet sent successfully to
        the server was 39,376,539 milliseconds ago. is longer than the server configured value of
        'wait_timeout'. You should consider either expiring and/or testing connection validity before
        use in your application, increasing the server configured values for client timeouts, or using
        the Connector/J connection property 'autoReconnect=true' to avoid this problem.

    Any ideas? Thanks!

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array (a 2D array) populated with various labels. Each element (cell) in the array contains a set of labels, and some cells may be empty. I need an algorithm to cluster the cells into discrete segments. A segment is defined as a set of cells that are adjacent within the array and that all share one common label. (Diagonal adjacency doesn't count, and I'm not clustering empty cells.)

        |-------|-------|-------|
        | Jane  | Joe   |       |
        | Jack  | Jane  |       |
        |-------|-------|-------|
        | Jane  | Jane  |       |
        |       | Joe   |       |
        |-------|-------|-------|
        |       | Jack  | Jane  |
        |       | Joe   |       |
        |-------|-------|-------|

    In the above arrangement of labels distributed over nine cells, the largest cluster is the "Jane" cluster occupying the four upper-left cells.

    What I've Considered: I've considered iterating through every label of every cell and testing whether the cell-label combination under inspection can be associated with a preexisting segment. If it cannot, it becomes the first member of a new segment; if it can, it joins that segment. Of course, to make this method reasonable I'd have to implement an elaborate hashing system: I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and lie in the path of the indices iterating through the array, so that I could find an adjacency without walking every cell of every preexisting segment.

    Why I Don't Like It: As is, the above algorithm doesn't handle the case where a cell can be associated with two distinct segments, one in the horizontal direction and one in the vertical direction. To handle that properly I would need to detect this specific case and then both associate the cell with a segment and merge the two adjacent identical segments. On the whole, this method and the intricate hashing it requires feel very inelegant. Additionally, I really only care about finding the large segments, and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
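
    A hedged sketch of a flood-fill alternative (the grid contents below are the 3x3 example from the question; everything else is made up): treat each (cell, label) pair as a node and grow a segment with a breadth-first search over the 4-neighbours that carry the same label. The horizontal-versus-vertical merge problem disappears because the search absorbs both directions as it goes, and each (cell, label) pair is visited once, so the cost is linear in the total number of labels in the grid.

        using System;
        using System.Collections.Generic;

        class Segmenter
        {
            // grid[r, c] holds the set of labels present in that cell (possibly empty).
            static int LargestSegment(HashSet<string>[,] grid, out string bestLabel)
            {
                int rows = grid.GetLength(0), cols = grid.GetLength(1);
                var visited = new HashSet<(int r, int c, string label)>();
                int best = 0; bestLabel = null;
                int[] dr = { 1, -1, 0, 0 }, dc = { 0, 0, 1, -1 };

                for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                foreach (string label in grid[r, c])
                {
                    if (!visited.Add((r, c, label))) continue;   // already part of a segment

                    // Breadth-first search over 4-neighbours sharing this label.
                    var queue = new Queue<(int r, int c)>();
                    queue.Enqueue((r, c));
                    int size = 0;
                    while (queue.Count > 0)
                    {
                        var (cr, cc) = queue.Dequeue();
                        size++;
                        for (int k = 0; k < 4; k++)
                        {
                            int nr = cr + dr[k], nc = cc + dc[k];
                            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                            if (!grid[nr, nc].Contains(label)) continue;
                            if (visited.Add((nr, nc, label))) queue.Enqueue((nr, nc));
                        }
                    }
                    if (size > best) { best = size; bestLabel = label; }
                }
                return best;
            }

            static void Main()
            {
                // The 3x3 example from the question.
                HashSet<string> S(params string[] xs) => new HashSet<string>(xs);
                var grid = new[,]
                {
                    { S("Jane", "Jack"), S("Joe", "Jane"),  S() },
                    { S("Jane"),         S("Jane", "Joe"),  S() },
                    { S(),               S("Jack", "Joe"),  S("Jane") }
                };
                Console.WriteLine(LargestSegment(grid, out var label) + " x " + label);  // 4 x Jane
            }
        }

    If even one linear pass is too slow, one possible shortcut is to flood-fill only from randomly sampled seed cells: large segments cover many cells, so they are found with high probability without visiting the whole grid.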

    Read the article

  • SET game odds simulation (MATLAB)

    - by yuk
    Here is an interesting problem for your weekend. :) I recently found the great card game SET. Briefly, there are 81 cards with four features: symbol (oval, squiggle or diamond), color (red, purple or green), number (one, two or three) and shading (solid, striped or open). The task is to find, among 12 dealt cards, a SET of 3 cards in which each of the four features is either all the same on each card or all different on each card (no 2+1 combination). In my free time I decided to code it in MATLAB to find a solution and to estimate the odds of having a set among randomly selected cards. Here is the code:

        %% initialization
        K = 12;  % cards to draw
        NF = 4;  % number of features (usually 3 or 4)
        setallcards = unique(nchoosek(repmat(1:3,1,NF),NF),'rows'); % all cards: rows - cards, columns - features
        setallcomb = nchoosek(1:K,3); % index of all combinations of K cards by 3

        %% test
        tic
        NIter = 1e2; % number of test iterations
        setexists = 0; % test results holder
        % C = progress('init'); % if you have progress function from FileExchange
        for d = 1:NIter
            % C = progress(C,d/NIter);
            % cards for current test
            setdrawncardidx = randi(size(setallcards,1),K,1);
            setdrawncards = setallcards(setdrawncardidx,:);
            % find all sets in current test iteration
            for setcombidx = 1:size(setallcomb,1)
                setcomb = setdrawncards(setallcomb(setcombidx,:),:);
                if all(arrayfun(@(x) numel(unique(setcomb(:,x))), 1:NF)~=2) % test one combination
                    setexists = setexists + 1;
                    break % to find only the first set
                end
            end
        end
        fprintf('Set:NoSet = %g:%g = %g:1\n', setexists, NIter-setexists, setexists/(NIter-setexists))
        toc

    100-1000 iterations are fast, but be careful with more: one million iterations takes about 15 hours on my home computer. Anyway, with 12 cards and 4 features I get odds of around 13:1 of having a set. This is actually a problem: the instruction book says this number should be 33:1, and that was recently confirmed by Peter Norvig, who provides Python code (which I didn't test). So, can you find the error?

    Read the article

< Previous Page | 34 35 36 37 38 39 40 41 42 43 44  | Next Page >