Search Results

Search found 4157 results on 167 pages for 'zero subnet'.

Page 109/167

  • iPhone UITableView populating variable rows sections from flat array

    - by Biko
    I thought that would be a very common and easy iPhone app. In the main app I connect to a database, retrieve values from the database (NSDate converted to NSString) and put them into a single array. Then in one of the views I populate a UITableView with elements from the array. The UITableView is grouped (sections). I step through the array to discover the number of sections (change section if new day). How do I retrieve the correct element of the array in cellForRowAtIndexPath? indexPath.section and indexPath.row seem useless as the row count starts from zero for each section. If the number of rows in each section were the same it would have been easy: [arryData objectAtIndex:(indexPath.row)+indexPath.section*[tblMatchesView numberOfRowsInSection:indexPath.section]]; But the number of rows in each section varies... :-)
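
    One way to handle sections of unequal length is to precompute how many rows each section holds and turn (section, row) into a flat index with a running offset. The question is Objective-C; the following is only a language-neutral sketch of that arithmetic, written in C# with hypothetical names:

        // Sketch of the index arithmetic only (the question itself is Objective-C).
        // sectionRowCounts[s] holds the number of rows in section s, computed once
        // while scanning the date-sorted flat array (new day => new section).
        static int FlatIndex(int[] sectionRowCounts, int section, int row)
        {
            int offset = 0;
            for (int s = 0; s < section; s++)
                offset += sectionRowCounts[s];   // rows contributed by earlier sections
            return offset + row;                 // the row restarts from zero per section
        }

    In Objective-C the equivalent would be an NSArray of per-section counts (or the flat array split into one array per day), so no multiplication by numberOfRowsInSection is needed in cellForRowAtIndexPath.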

    Read the article

  • Range annotation between nothing and 100?

    - by aticatac
    Hi, I have a [Range] annotation that looks like this: [Range(0, 100)] public int AvailabilityGoal { get; set; } It works as it should, I can only enter values between 0 and 100, but I also want the input box to be optional; the user shouldn't get a validation error if the input box is empty. If the user leaves it empty it should make AvailabilityGoal = 0, but I don't want to force the user to enter a zero. I tried this but it (obviously) didn't work: [Range(typeof(int?), null, "100")] Is it possible to solve this with Data Annotations or in some other way? Thanks in advance. Bobby
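
    A sketch of one common workaround (assuming ASP.NET-style data annotations; not verified against this exact project): make the property nullable so an empty input binds to null, which RangeAttribute ignores, and fall back to zero where the value is consumed.

        using System.ComponentModel.DataAnnotations;

        public class AvailabilityModel
        {
            // An empty input box binds to null; [Range] skips null values,
            // so leaving the field blank raises no validation error.
            [Range(0, 100)]
            public int? AvailabilityGoal { get; set; }

            // Hypothetical helper: treat "left empty" as 0 when the value is used.
            public int AvailabilityGoalOrZero
            {
                get { return AvailabilityGoal ?? 0; }
            }
        }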

    Read the article

  • How do I pass a DBNull value to a parameterized SELECT statement?

    - by Dan
    I have a SQL statement in C# (.NET Framework 4 running against SQL Server 2k8) that looks like this: SELECT [Column1] FROM [Table1] WHERE [Column2] = @Column2 The above query works fine with the following ADO.NET code: DbParameter parm = Factory.CreateDbParameter(); parm.Value = "SomeValue"; parm.ParameterName = "@Column2"; //etc... This query returns zero rows, though, if I assign DBNull.Value to the DbParameter's Value member even if there are null values in Column2. If I change the query to accommodate the null test specifically: SELECT [Column1] FROM [Table1] WHERE [Column2] IS @Column2 I get an "Incorrect syntax near '@Column2'" exception at runtime. Is there no way that I can use null or DBNull as a parameter in the WHERE clause of a SELECT statement?
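
    A frequently used workaround (a sketch, not the only option) is to keep passing DBNull.Value for the parameter but make the WHERE clause NULL-aware, since with ANSI_NULLS on, [Column2] = @Column2 never matches NULL rows:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static DataTable Query(string connectionString, object column2Value)
        {
            const string sql =
                "SELECT [Column1] FROM [Table1] " +
                "WHERE [Column2] = @Column2 OR (@Column2 IS NULL AND [Column2] IS NULL)";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                // Pass DBNull.Value (not a C# null) when searching for NULL rows.
                cmd.Parameters.AddWithValue("@Column2", column2Value ?? DBNull.Value);

                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table);   // Fill opens/closes the connection
                return table;
            }
        }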

    Read the article

  • Problem with Silverlight/WPF in a scrolling HTML div

    - by Mat
    Hi all, I have a Silverlight object sitting at the bottom of a scrollable div. This object is submitted to a WCF backend via a JavaScript button. The problem is that, because the Silverlight object is at the bottom of the scrollable div, it is not viewable until you have scrolled down. This generates an error when the JavaScript button is clicked (if I haven't scrolled down) - awfully strange, or am I just an idiot :/ If I scroll down so that the Silverlight object is in view, it submits just fine. The error I get is an alert-type error which says: The parameter value must be greater than zero. Parameter name: pixelWidth This seems to be returned from the WCF service. What could cause this? Can anyone help me rectify it? Kind regards Mat.

    Read the article

  • Tool to diagonalize large matrices

    - by Xodarap
    I want to compute a diffusion kernel, which involves taking exp(b*A) where A is a large matrix. In order to play with values of b, I'd like to diagonalize A (so that exp(A) runs quickly). My matrix is about 25k x 25k, but is very sparse - only about 60k values are non-zero. Matlab's "eigs" function runs out of memory, as does octave's "eig" and R's "eigen." Is there a tool to find the decomposition of large, sparse matrices? Dunno if this is relevant, but A is an adjacency matrix, so it's symmetric, and it is full rank.

    Read the article

  • Averaging corrupted images to eliminate the noise in Matlab

    - by Mertie Pertie
    Hi all, As you can tell from the title, I want to average some .jpg images which are corrupted by zero-mean additive Gaussian noise. After searching the internet, I figured out that I should add the image matrices and divide the sum by the number of matrices. However the resultant image is totally black. Normally, when the number of images increases the resultant image should get better, but when I use more images it gets darker. I am using 800x600 black and white images with the .jpg extension. Here is the script I used: image1 = imread ('PIC1.jpg'); image2 = imread ('PIC2.jpg'); image3 = imread ('PIC3.jpg'); image4 = imread ('PIC4.jpg'); sum = image1 + image2 + image3 + image4; av = sum / 4; imshow(av); Thanks in advance
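
    The usual cause of the dark result is that imread returns uint8, so image1 + image2 + image3 + image4 saturates at 255 before the division; in Matlab, summing double(image1) + double(image2) + ... and converting back with uint8() avoids it. As a language-neutral illustration of the same accumulate-in-a-wider-type idea, here is a hedged C# sketch over hypothetical raw 8-bit pixel buffers:

        // Sketch: average N noisy 8-bit images without saturating the pixel type.
        static byte[] AveragePixels(byte[][] images)
        {
            int length = images[0].Length;
            var sum = new long[length];              // wide accumulator, no clipping at 255

            foreach (var image in images)
                for (int i = 0; i < length; i++)
                    sum[i] += image[i];

            var average = new byte[length];
            for (int i = 0; i < length; i++)
                average[i] = (byte)(sum[i] / images.Length);
            return average;
        }

    With a saturating sum, the total stays near 255 while the divisor grows, which is exactly why adding more images makes the result darker instead of cleaner.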

    Read the article

  • Strange findFn malfunction

    - by gd047
    I noticed a strange malfunction when using the findFn function (package sos) and I can't find the source. While it works fine on my Windows XP PC, it does not on my Vista one. library (sos) findFn("randomization test") # in both finds 72 results findFn("{randomization test}") # In XP finds about 19, but in Vista whenever I use {} and more than one word inside, # I keep getting the following: found 0 matches x has zero rows; nothing to display. Warning message: In findFn("{randomization test}") : HIT not found in HTML; processing one page only. R ver = 2.10.1 and packages updated. Any ideas where the problem might be? Bonus: obviously, I was looking for functions about tests for randomized experiments

    Read the article

  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Some background: I'm very much new to networking in general, and virtualization in particular. I'm trying to set up a series of VMs as we are transitioning to a thin client setup. I have been supplied a limited number of static IP addresses. The server is located in an offsite building which houses the network we use to connect to the internet, share folders etc. The setup I've been trying to go for is this: The host OS (Windows Server 2008 R2) is bound to one NIC using one of the static IPs (say, Nic1 and IP 10.255.6.61). I've set up another external virtual network attached to another physical NIC, and a virtual private network attached to no NIC. There is one VM running the same OS (as the host). This VM is connected to both the external virtual network (and uses another static IP, say Nic2 and IP 10.255.6.62) and also to the virtual private network (I gave it a static random IP 192.168.88.1, subnet mask 255.255.255.0). This virtual private network is connected to all the other VMs. I'd like to share the internet connection with all the other VMs on the private virtual network, and so I installed the RRAS role on the server connected to Nic2, and selected the option to share the internet over the VPN. I've run through the RRAS wizard a few times, trying different configurations, but none of them seem to be letting the other VMs connect to the net. The VMs seem to connect to the virtual private network fine, they are assigned an IP address and everything, but no internet, and no rest of the network either. The other problem: in general I connect to the VMs with RDP. Will that be possible with a setup like this? i.e. will the VMs show up as computers on the network? If not, what are my other options? Thanks! ~josh

    Read the article

  • Need to query MongoDB using PHP

    - by Mario Villarroel
    I need to query MongoDB with something like this: ("something" < X OR "something" = "nll") AND ("someother" > X OR "someother" = "nll") AND z=$z AND s=1 I've tried a few things, but can't get it to work; this is what I've tried: find( array( '$or'=>array(array("something"=>array("$le",$X)),array("something"=>"nll")), '$or'=>array(array("someother"=>array("$ge",$X)),array("someother"=>"nll")) )) But that's getting me the OR overwritten, so I'm lost on that... After digging a bit more, I assembled this code that seems to be what I need, but doesn't work either: find( array('$and'=>array( array( '$or' => array( array("something"=>array('$gte'=>$X)),array("something"=>"nll"))), array('$or' => array( array("someother"=>array('$lte'=>$X)),array("someother"=>"nll")))),"Z"=>$z, "s"=>"1"); But this doesn't work as it returns zero results, and I know for sure that there are more than 2 items that match in the db. (100% certain)

    Read the article

  • Cannot read app.config, why???

    - by user46503
    Hello, I'm trying to get data from app.config and I always get zero. The App.config is here: <?xml version="1.0" encoding="utf-8"?> <configuration> <connectionStrings> <add name="ExplorerContext" connectionString="metadata=res://*/ExplorerData.csdl|res://*/ExplorerData.ssdl|res://*/ExplorerData.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MYT\SQLEXPRESS;Initial Catalog=Explorer;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration> Could someone explain what is wrong, why I cannot get the values, System.Configuration.ConfigurationManager.AppSettings.Count is always 0 Thanks
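
    The config above puts the value in <connectionStrings>, not <appSettings>, so ConfigurationManager.AppSettings.Count being 0 is expected. A sketch of reading it from the connection-strings collection instead (assuming this app.config belongs to the executable that actually runs, and that System.Configuration.dll is referenced):

        using System;
        using System.Configuration;   // reference System.Configuration.dll

        class ConfigDemo
        {
            static void Main()
            {
                ConnectionStringSettings settings =
                    ConfigurationManager.ConnectionStrings["ExplorerContext"];

                // Null means the entry was not found in the config of the running exe.
                Console.WriteLine(settings == null ? "not found" : settings.ConnectionString);
            }
        }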

    Read the article

  • Why does a C# System.Decimal remember trailing zeros?

    - by Rob Davey
    Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example: public void DoSomething() { decimal dec1 = 0.5M; decimal dec2 = 0.50M; Console.WriteLine(dec1); //Output: 0.5 Console.WriteLine(dec2); //Output: 0.50 Console.WriteLine(dec1 == dec2); //Output: True } The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?
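
    The short answer is that a System.Decimal stores a scale (a power-of-ten divisor) alongside its 96-bit integer, so 0.5 is kept as 5 with scale 1 and 0.50 as 50 with scale 2; == compares numeric values, while ToString reflects the stored scale (handy for currency, where 1.2 and 1.20 are equal but print differently). A small sketch that exposes the stored scale:

        using System;

        class DecimalScaleDemo
        {
            static void Main()
            {
                decimal dec1 = 0.5M;
                decimal dec2 = 0.50M;

                // Element [3] of GetBits packs the sign and the scale;
                // bits 16-23 hold the scale (digits after the decimal point).
                int scale1 = (decimal.GetBits(dec1)[3] >> 16) & 0xFF;
                int scale2 = (decimal.GetBits(dec2)[3] >> 16) & 0xFF;

                Console.WriteLine(scale1);        // 1
                Console.WriteLine(scale2);        // 2
                Console.WriteLine(dec1 == dec2);  // True: comparison ignores the scale
            }
        }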

    Read the article

  • Network routing between Mac and virtual XP

    - by Kevin
    Hi - I have a Mac laptop running XP inside VirtualBox. The network is set up to be a "Bridged Adapter" so that the IPs for both the host & guest OS's are assigned by my wireless router. My guest XP has a Nortel VPN connecting to the corporate LAN. When this is connected, I want to allow my host Mac OS to access the corporate network. But I'm struggling. Without the Nortel VPN running, I can change routing on the Mac so all traffic is sent via the guest XP - this works. But once I activate the VPN, this no longer works. If I try to change the routing on the Mac to run through the IP address assigned to the Nortel adapter, I get a "Network is unreachable" error. Below is the output from ipconfig /all on the guest XP OS. I'm beginning to believe that what I want to do is not possible because of the way Nortel secures the VPN - but before I give up I thought I'd post the problem here. Thanks, Kevin

        z:\eclipseworkspace\RESMobileSuite\trunk>ipconfig /all

        Windows IP Configuration
        Host Name . . . . . . . . . . . . : zzzz-3177b42dd0
        Primary Dns Suffix  . . . . . . . :
        Node Type . . . . . . . . . . . . : Unknown
        IP Routing Enabled. . . . . . . . : Yes
        WINS Proxy Enabled. . . . . . . . : No
        DNS Suffix Search List. . . . . . : zzzz.zzz

        Ethernet adapter Local Area Connection:
        Connection-specific DNS Suffix  . :
        Description . . . . . . . . . . . : AMD PCNET Family PCI Ethernet Adapter
        Physical Address. . . . . . . . . : 08-00-XX-XX-XX-XX
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : 192.168.1.3
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.1
        DHCP Server . . . . . . . . . . . : 192.168.1.1
        DNS Servers . . . . . . . . . . . : 192.168.1.1
        Lease Obtained. . . . . . . . . . : 30 April 2010 12:22:02
        Lease Expires . . . . . . . . . . : 01 May 2010 12:22:02

        Ethernet adapter {8EB7A442-9683-45FB-A602-56110A4B3434}:
        Connection-specific DNS Suffix  . : zzzz.zz
        Description . . . . . . . . . . . : Nortel IPSECSHM Adapter - Packet Scheduler Miniport
        Physical Address. . . . . . . . . : 44-45-YY-YY-YY-YY
        Dhcp Enabled. . . . . . . . . . . : No
        IP Address. . . . . . . . . . . . : XXX.4.52.62
        Subnet Mask . . . . . . . . . . . : 255.255.254.0
        Default Gateway . . . . . . . . . : XXX.4.52.62
        DNS Servers . . . . . . . . . . . : XXX.6.21.36
                                            XXX.6.21.100

    Read the article

  • iptables firewall rules not allowing ssh from lan to DMZ

    - by ageis23
        Chain INPUT (policy ACCEPT)
        target      prot opt source           destination
        REJECT      tcp  --  anywhere         anywhere         tcp dpt:www reject-with tcp-reset
        REJECT      tcp  --  anywhere         anywhere         tcp dpt:telnet reject-with tcp-reset
        ACCEPT      0    --  anywhere         anywhere         state RELATED,ESTABLISHED
        DROP        udp  --  anywhere         anywhere         udp dpt:route
        DROP        udp  --  anywhere         anywhere         udp dpt:route
        ACCEPT      udp  --  anywhere         anywhere         udp dpt:route
        logdrop     icmp --  anywhere         anywhere
        logdrop     igmp --  anywhere         anywhere
        ACCEPT      udp  --  anywhere         anywhere         udp dpt:5060
        ACCEPT      0    --  anywhere         anywhere         state NEW
        logaccept   0    --  anywhere         anywhere         state NEW
        ACCEPT      0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere
        logdrop     0    --  anywhere         anywhere

        Chain FORWARD (policy ACCEPT)
        target      prot opt source           destination
        REJECT      0    --  192.168.0.0/24   192.168.2.0/24   reject-with icmp-port-unreachable
        ACCEPT      tcp  --  choister         192.168.2.142    tcp dpt:ssh state NEW
        REJECT      0    --  192.168.0.0/24   192.168.3.0/24   reject-with icmp-port-unreachable
        ACCEPT      gre  --  192.168.1.0/24   anywhere
        ACCEPT      tcp  --  192.168.1.0/24   anywhere         tcp dpt:1723
        ACCEPT      0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere
        TCPMSS      tcp  --  anywhere         anywhere         tcp flags:SYN,RST/SYN TCPMSS clamp to PMTU
        lan2wan     0    --  anywhere         anywhere
        ACCEPT      0    --  anywhere         anywhere         state RELATED,ESTABLISHED
        logaccept   tcp  --  anywhere         choister         tcp dpt:www
        TRIGGER     0    --  anywhere         anywhere         TRIGGER type:in match:0 relate:0
        trigger_out 0    --  anywhere         anywhere
        logaccept   0    --  anywhere         anywhere         state NEW
        logdrop     0    --  anywhere         anywhere

    The ssh server I'm trying to connect to is in the DMZ (192.168.0.145). It's mainly used as a web server. I need access to it from my room, 192.168.2.142. I don't get why ssh traffic can't be forwarded to the 192.168.2.0 subnet. I'm sure it's the reject rule that's causing this, because it works without it.

    Read the article

  • Can't reliably ping 6224 router from directly-attached system

    - by David Mackintosh
    OK, here's my situation. This is on the internet. The 6224 is the router in this picture and physically resides in Kanata. Both VLAN 1697 and 3994 are provided by an internet service provider. These VLANs are delivered over a single 1 Gb Ethernet wire. The Kanata hosts are directly attached to the 6224; the other two sites are remote. VLAN 3994 is a single IP address space, so theoretically it shouldn't matter physically where the hosts on that subnet are. Here's the problem. I have a monitoring system which is connected further into the internet, so probes from the monitor come into this diagram on the 1697 VLAN. When I ping hosts at Albert or Bells Corners from the internet, there is 0 loss. The connection looks perfect. When I ping hosts at Kanata, I lose anywhere from 10 to 40% of the pings. The loss is not predictable, but when I do lose them, I always lose at least 3, usually 4, rarely more, pings in a bunch. I have attached a monitor directly to the 6224 in Kanata on 3994. When the monitor pings the 6224 routing interface, I see exactly the same loss pattern -- but NOT at the same time as the loss from the remote system. Ping time is around 1ms. When the monitor pings another system directly attached to the 6224, there is 0 loss. Ping time is about 0.1ms, one-tenth of the time to ping the router. Anyone know what is going on here?

    Read the article

  • Initializing structs in C++

    - by Neil Butterworth
    As an addendum to this question, what is going on here: #include <string> using namespace std; struct A { string s; }; int main() { A a = {0}; } Obviously, you can't set a std::string to zero. Can someone provide an explanation (backed with references to the C++ Standard, please) about what is actually supposed to happen here? And then explain (for example): int main() { A a = {42}; } Are either of these well-defined? Once again an embarrassing question for me - I always give my structs constructors, so the issue has never arisen before.

    Read the article

  • Any good way to set the exit status of a Cocoa application?

    - by buglesareking
    I have a Cocoa app which interacts with a server and displays a GUI. If there is a fatal error, I display an alert and exit. I'd like to set the exit status to a non-zero value to reflect that an error occurred, for ease of interaction with some other UNIX-based tools. Unfortunately I've been unable to find a good way to do so - NSApplication doesn't seem to have any way to set an exit status. At the moment, I've subclassed NSApplication and added an exitStatus ivar (which I set in my app delegate when necessary), then overridden -terminate: so that it calls exit(exitStatus). This works fine, but it seems a bit grungy to me, not to mention that I may be missing something important that the standard -terminate: is doing behind the scenes. I can't call [super terminate:sender] in my subclassed method, because that exit()s without giving me a chance to set the status. Am I missing something obvious?

    Read the article

  • Insane transformations of a view

    - by Mike
    I have this view and I do some rotation transformation to it using something like myView.transform = CGAffineTransformMakeRotation(degreesToRadian(90)); //The view was originally at angle 0. At some other point in my code, I would like to scale the view, animating it, so I do [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:1.0]; myViews.transform = CGAffineTransformMakeScale(2.0f, 2.0f); [UIView commitAnimations]; but when I do that the animation is performed as if the view were at 0 degrees, ignoring the previous transformation. It simply assumes the view is still at zero degrees, so this animation scales the view and rotates it back to 0 degrees (!!!!?????) Is this some bug or am I missing something? Thanks.

    Read the article

  • What's a good way to detect wrap-around in a fixed-width message counter?

    - by Kristo
    I'm writing a client application to communicate with a server program via UDP. The client periodically makes requests for data and needs to use the most recent server response. The request message has a 16-bit unsigned counter field that is echoed by the server so I can pair requests with server responses. Since it's UDP, I have to handle the case where server responses arrive out of order (or don't arrive at all). Naively, that means holding on to the highest message counter seen so far and dropping any incoming message with a lower number. But that will fail as soon as we pass 65535 messages and the counter wraps back to zero. Is there a good way to detect (with reasonable probability) that, for example, message 5 actually comes after message 65,000? The implementation language is C++.
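
    A common answer is serial-number arithmetic: subtract the counters in their own 16-bit width and test the sign of the result, which tolerates wrap-around as long as the two counters are never more than 32767 apart. The question's implementation language is C++, where the same cast-and-compare works; a hedged C# sketch of the idea:

        // Sketch: true when 'candidate' is newer than 'latest', allowing 16-bit wrap-around.
        // Correct as long as the two counters never drift more than 32767 apart.
        static bool IsNewer(ushort candidate, ushort latest)
        {
            // The difference is taken modulo 2^16 and then reinterpreted as signed.
            short diff = unchecked((short)(candidate - latest));
            return diff > 0;
        }

    With this test, message 5 compares as newer than message 65000 (the 16-bit difference wraps to +541), while a difference of zero indicates a duplicate of the current message.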

    Read the article

  • Style question: Writing "this." before instance variable and methods: good or bad idea?

    - by Uri
    One of my nasty (?) programming habits in C++ and Java is to always precede calls or accesses to members with a this. For example: this.process(this.event). A few of my students commented on this, and I'm wondering if I am teaching bad habits. My rationale is: 1) Makes code more readable — Easier to distinguish fields from local variables. 2) Makes it easier to distinguish standard calls from static calls (especially in Java) 3) Makes me remember that this call (unless the target is final) could end up on a different target, for example in an overriding version in a subclass. Obviously, this has zero impact on the compiled program, it's just readability. So am I making it more or less readable? Related Question Note: I turned it into a CW since there really isn't a correct answer.

    Read the article

  • Need to read .symtab

    - by user361190
    I am frustrated. I have a simple question: I compile a simple program with gcc, and if I look at the section headers using objdump, it does not show the section ".symtab". For the same a.out file, readelf shows the section. See the snip below: [25] .symtab SYMTAB 00000000 000ca4 000480 10 26 2c 4 [26] .strtab STRTAB 00000000 001124 00025c 00 0 0 1 Why? In the default linker script I don't find a definition for the .symtab section. If I add a definition myself in the linker script, like: .... PROVIDE(__start_sym) .symtab : { *(.symtab)} PROVIDE(__end_sym) .... the difference between the addresses of __start_sym and __end_sym is zero, which means no such section is added to the output file. But readelf is able to read the section and dump its contents. How? Why?

    Read the article

  • Cisco access-list confusion

    - by LonelyLonelyNetworkN00b
    I'm having trouble implementing access-lists on my ASA 5510 (8.2) in a way that makes sense to me. I have one access-list for every interface I have on the device. The access-lists are added to the interface via the access-group command. Let's say I have these access-lists: access-group WAN_access_in in interface WAN access-group INTERNAL_access_in in interface INTERNAL access-group Production_access_in in interface PRODUCTION WAN has security level 0, Internal security level 100, Production has security level 50. What I want to do is have an easy way to poke holes from Production to Internal. This seems to be pretty easy, but then the whole notion of security levels doesn't seem to matter any more. I then can't exit out the WAN interface. I would need to add an ANY ANY access-list, which in turn opens access completely to the INTERNAL net. I could solve this by issuing explicit DENY ACEs for my internal net, but that sounds like quite the hassle. How is this done in practice? In iptables I would use logic something like this: if source equals production-subnet and outgoing interface equals WAN, ACCEPT.

    Read the article

  • MySQL load data null values

    - by SP1
    Hello, I have a file that can contain from 3 to 5 columns of numerical values separated by commas. Empty fields are defined, except when they are at the end of the row:

        1,2,3,4,5
        1,2,3,,5
        1,2,3

    The following table was created in MySQL:

        +-------+--------+------+-----+---------+-------+
        | Field | Type   | Null | Key | Default | Extra |
        +-------+--------+------+-----+---------+-------+
        | one   | int(1) | YES  |     | NULL    |       |
        | two   | int(1) | YES  |     | NULL    |       |
        | three | int(1) | YES  |     | NULL    |       |
        | four  | int(1) | YES  |     | NULL    |       |
        | five  | int(1) | YES  |     | NULL    |       |
        +-------+--------+------+-----+---------+-------+

    I am trying to load the data using the MySQL LOAD command:

        load data infile '/tmp/testdata.txt' into table moo fields terminated by "," lines terminated by "\n";

    The resulting table:

        +------+------+-------+------+------+
        | one  | two  | three | four | five |
        +------+------+-------+------+------+
        | 1    | 2    | 3     | 4    | 5    |
        | 1    | 2    | 3     | 0    | 0    |
        | 1    | 2    | 3     | NULL | NULL |
        +------+------+-------+------+------+

    The problem lies with the fact that when a field is empty in the raw data and is not defined, MySQL for some reason does not use the column's default value (which is NULL) and uses zero. NULL is used correctly when the field is missing altogether. Unfortunately, I have to be able to distinguish between NULL and 0 at this stage, so any help would be appreciated. Thanks S.
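
    When a field is present but empty, MySQL inserts the implicit default for a numeric column (0) rather than NULL; the usual workaround is to read the affected columns into user variables and convert empty strings with NULLIF in a SET clause. A hedged sketch, wrapped in C# with the MySql.Data connector (an assumption; the LOAD DATA statement is the relevant part, and "Allow User Variables=true" may be needed in the connection string so @vfour/@vfive are not mistaken for command parameters):

        using MySql.Data.MySqlClient;   // assumes the MySQL Connector/NET package

        class LoadDemo
        {
            static void Load(string connectionString)
            {
                // Read the last two columns into user variables, then map '' to NULL.
                const string sql = @"
                    LOAD DATA INFILE '/tmp/testdata.txt' INTO TABLE moo
                    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
                    (one, two, three, @vfour, @vfive)
                    SET four = NULLIF(@vfour, ''), five = NULLIF(@vfive, '');";

                using (var conn = new MySqlConnection(connectionString))
                using (var cmd = new MySqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }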

    Read the article

  • How to replace values in multi-valued ESE column?

    - by Soonts
    I have a multi-valued short ASCII text column in one of the tables in my ESE database that holds the person's phone numbers. I have the new set of values, and I'd like to wipe the old values completely and only use the new values. The JET_bitSetRevertToDefaultValue bit doesn't seem to work. While the MSDN documentation says "It causes the column to return the default column value on subsequent retrieve column operations. All existing column values are removed.", I found that it does nothing (no return value is returned). Or, is there an easy way to find out how many values the column contains (this could be zero, e.g. when I'm doing an insertion, not an update)? If there were, I could just run a loop from 'nValues' to 1, erasing each value by setting it to null while providing the itagSequence value, to achieve what I want. I'm programming in C#, and using the latest version of the ManagedEsent library. Thanks in advance!

    Read the article

  • Does Active Directory on Server 2003 R2 support IPv6 subnets in Sites and Services?

    - by NorbyTheGeek
    I've been experimenting with IPv6 at our organization. The domain controllers (all 2003 R2) and most of the servers (2003 R2 / 2008 / 2008 R2) have IPv6 configured. We have a subnet assigned through a tunnel provider. Currently, the only workstation that is running IPv6 is mine. (Windows 7) I have been noticing that my workstation is picking domain controllers in other sites for things like DFS, and I finally realized that I don't have the IPv6 subnets set up in Active Directory Sites and Services (ADSS). But when I try to add a IPv6 prefix in ADSS, it tells me: Windows cannot create the object 2001:xxxx:xxxx:xxxx::/64 because: The object name has bad syntax. I believe I may be using the 2008 version of the admin tools (ADSS reports version 6.1.7601.17514) so I'm wondering if maybe my 2003 R2 Active Directory schema doesn't support configuring IPv6 subnets in ADSS. Is this true? UPDATE Even with 2008 R2 schema in Active Directory, I'm having the same problem. How can I get my IPv6 subnets into Sites and Services?

    Read the article

  • Connection failed between Windows Servers

    - by Kerby82
    I'm setting up an infrastructure based on Windows Server 2012. The firewall is turned off and I can't access the domain controller to check the group policy. I'm experiencing some connection problems between servers. All the servers are running a site on TCP port 80, and I have checked with netstat that the web server is binding to every IP on the servers. If I try to telnet from the server itself on port 80 it works (using the DNS name); if I try the same telnet from another machine I get a connection failure. The DNS works, the ping is successful, the servers are on the same subnet, and the firewall is turned off (even though Windows Advanced Firewall says that some settings can be managed by the system administrator - I guess via Group Policy). I don't know how to troubleshoot further. Do you have any idea? Is it possible that the firewall looks turned off but some group policy is blocking the connections? (I also checked Group Policy - Administrative Templates - Network Connections - Windows Firewall; everything is Not Configured.) I need some hints on how to keep troubleshooting such a problem.

    Read the article
