Search Results

Search found 17913 results on 717 pages for 'old school rules'.


  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided with the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most catalog, file and certificate requests I get a 403 response. The auth.conf:

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?


  • .NET asmx web services: serialize object property as string property to support versioning

    - by mcliedtk
    I am in the process of upgrading our web services to support versioning. We will be publishing our versioned web services like so:

        http://localhost/project/services/1.0/service.asmx
        http://localhost/project/services/1.1/service.asmx

    One requirement of this versioning is that I am not allowed to break the original WSDL (the 1.0 WSDL). The challenge lies in how to shepherd the newly versioned classes through the logic that lies behind the web services (this logic includes a number of command and adapter classes). Note that upgrading to WCF is not an option at the moment.

    To illustrate this, let's consider an example with Blogs and Posts. Prior to the introduction of versions, we were passing concrete objects around instead of interfaces, so an AddPostToBlog command would take in a Post object instead of an IPost:

        // Old AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, Post post) {
            // constructor body
        }

    With the introduction of versioning, I would like to maintain the original Post while adding a PostOnePointOne. Both Post and PostOnePointOne will implement the IPost interface (they are not extending an abstract class because that inheritance breaks the WSDL, though I suppose there may be a way around that via some fancy XML serialization tricks):

        // New AddPostToBlog constructor.
        public AddPostToBlog(Blog blog, IPost post) {
            // constructor body
        }

    This brings us to my question regarding serialization. The original Post class has an enum property named Type. For various cross-platform compatibility issues, we are changing the enums in our web services to strings. So I would like to do the following:

        // New IPost interface.
        public interface IPost {
            object Type { get; set; }
        }

        // Original Post object.
        public class Post {
            // The purpose of this attribute would be to maintain how
            // the enum currently is serialized even though now the
            // type is an object instead of an enum (internally the
            // object actually is an enum here, but it is exposed as
            // an object to implement the interface).
            [XmlMagic(SerializeAsEnum)]
            object Type { get; set; }
        }

        // New version of the Post object.
        public class PostOnePointOne {
            // The purpose of this attribute would be to force
            // serialization as a string even though it is an object.
            [XmlMagic(SerializeAsString)]
            object Type { get; set; }
        }

    The XmlMagic refers to an XmlAttribute or some other part of the System.Xml namespace that would allow me to control the type of the object property being serialized (depending on which version of the object I am serializing). Does anyone know how to accomplish this?


  • How do I configure OpenVPN for accessing the internet with one NIC?

    - by Lekensteyn
    I've been trying to get OpenVPN to work for three days. After reading many questions, the HOWTO, the FAQ and even parts of a guide to Linux networking, I cannot get an Internet connection through the tunnel. I'm trying to set up an OpenVPN server on a VPS, which will be used for:

    - secure access to the Internet
    - bypassing port restrictions (directadmin/2222, for example)
    - an IPv6 connection, if possible (my client only has IPv4 connectivity, while the VPS has both IPv4 and native IPv6 connectivity)

    I can connect to my server and access the machine (HTTP), but Internet connectivity fails completely. I'm using ping 8.8.8.8 to test whether my connection works or not. Using tcpdump and iptables -t nat -A POSTROUTING -j LOG, I can confirm that the packets reach my server. If I ping 8.8.8.8 on the VPS, I get an echo-reply from 8.8.8.8 as expected. When pinging from the client, I do not get an echo-reply. The VPS has only one NIC: eth0. It runs on Xen.

    Summary: I want to have a secure connection between my laptop and the Internet using OpenVPN. If that works, I want to have IPv6 connectivity as well.

    Network setup and software:

        Home laptop (eth0: 192.168.2.10) (tap0: 10.8.0.2)
        (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1)
              |
            wifi
              |
        router/gateway (gateway 192.168.2.1)
              |
          INTERNET
              |
        VPS (eth0: 1.2.3.4) (gateway, tap0: 10.8.0.1)
        (running Debian 6; OpenVPN 2.1.3-2)

    Wifi and my home router should not cause problems, since all traffic goes encrypted over UDP port 1194. I've turned IP forwarding on:

        # echo 1 > /proc/sys/net/ipv4/ip_forward

    iptables has been configured to allow forwarding traffic as well:

        iptables -F FORWARD
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        iptables -A FORWARD -j DROP

    I've tried each of these rules separately without luck (flushing the chains before executing):

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to 1.2.3.4
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    route -n before (server):

        1.2.3.4         0.0.0.0         255.255.255.0   U   0 0 0 eth0
        0.0.0.0         1.2.3.4         0.0.0.0         UG  0 0 0 eth0

    route -n after (server):

        1.2.3.4         0.0.0.0         255.255.255.0   U   0 0 0 eth0
        10.8.0.0        0.0.0.0         255.255.255.0   U   0 0 0 tap0
        0.0.0.0         1.2.3.4         0.0.0.0         UG  0 0 0 eth0

    route -n before (client):

        192.168.2.0     0.0.0.0         255.255.255.0   U   2    0 0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U   1000 0 0 wlan0
        0.0.0.0         192.168.2.1     0.0.0.0         UG  0    0 0 wlan0

    route -n after (client):

        1.2.3.4         192.168.2.1     255.255.255.255 UGH 0    0 0 wlan0
        10.8.0.0        0.0.0.0         255.255.255.0   U   0    0 0 tap0
        192.168.2.0     0.0.0.0         255.255.255.0   U   2    0 0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U   1000 0 0 wlan0
        0.0.0.0         10.8.0.1        128.0.0.0       UG  0    0 0 tap0
        128.0.0.0       10.8.0.1        128.0.0.0       UG  0    0 0 tap0
        0.0.0.0         192.168.2.1     0.0.0.0         UG  0    0 0 wlan0

    SERVER config:

        proto udp
        dev tap
        ca ca.crt
        cert server.crt
        key server.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        ifconfig-pool-persist ipp.txt
        keepalive 10 120
        tls-auth ta.key 0
        comp-lzo
        user nobody
        group nobody
        persist-key
        persist-tun
        log-append openvpn-log
        verb 3
        mute 10

    CLIENT config:

        dev tap
        proto udp
        remote 1.2.3.4 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        tls-auth ta.key 1
        comp-lzo
        verb 3
        mute 20

    traceroute 8.8.8.8 works as expected (similar output without OpenVPN activated):

        1 10.8.0.1 (10.8.0.1) 24.276 ms 26.891 ms 29.454 ms
        2 gw03.sbp.directvps.nl (178.21.112.1) 31.161 ms 31.890 ms 34.458 ms
        3 ge0-v0652.cr0.nik-ams.nl.as8312.net (195.210.57.105) 35.353 ms 36.874 ms 38.403 ms
        4 ge0-v3900.cr0.nik-ams.nl.as8312.net (195.210.57.53) 41.311 ms 41.561 ms 43.006 ms
        5 * * *
        6 209.85.248.88 (209.85.248.88) 147.061 ms 36.931 ms 28.063 ms
        7 216.239.49.36 (216.239.49.36) 31.109 ms 33.292 ms 216.239.49.28 (216.239.49.28) 64.723 ms
        8 209.85.255.130 (209.85.255.130) 49.350 ms 209.85.255.126 (209.85.255.126) 49.619 ms 209.85.255.122 (209.85.255.122) 52.416 ms
        9 google-public-dns-a.google.com (8.8.8.8) 41.266 ms 44.054 ms 44.730 ms

    If you have any suggestions, please comment or answer. Thanks in advance.


  • Find optimal strategy and AI for the game 'Proximity'?

    - by smci
    'Proximity' is a strategy game of territorial domination, similar to Othello, Go and Risk. It is for two players and uses a 10x12 hex grid; the game was invented by Brian Cable in 2007. It seems to be a worthy game for discussing a) an optimal algorithm and then b) how to build an AI. Strategies are going to be probabilistic or heuristic-based, due to the randomness factor and the insane branching factor (20^120), so objective comparison will be hard. A compute time limit of 5s per turn seems reasonable.

    - Game: Flash version here, and many copies elsewhere on the web
    - Rules: here
    - Object: to have control of the most armies after all tiles have been placed.

    Each turn you receive a randomly numbered tile (value between 1 and 20 armies) to place on any vacant board space. If this tile is adjacent to any ally tiles, it will strengthen each tile's defenses +1 (up to a max value of 20). If it is adjacent to any enemy tiles, it will take control over them if its number is higher than the number on the enemy tile.

    Thoughts on strategy: here are some initial thoughts; setting the computer AI to Expert will probably teach a lot:

    1. Minimizing your perimeter seems to be a good strategy, to prevent flips and minimize worst-case damage.
    2. Like in Go, leaving holes inside your formation is lethal, only more so with the hex grid, because you can lose armies on up to 6 squares in one move.
    3. Low-numbered tiles are a liability, so place them away from your main territory, near the board edges and scattered. You can also use low-numbered tiles to plug holes in your formation, or make small gains along the perimeter which the opponent will not tend to bother attacking.
    4. A triangle formation of three pieces is strong, since the pieces mutually reinforce and also reduce the perimeter.
    5. Each tile can be flipped at most 6 times, i.e. when its neighbor tiles are occupied. Control of a formation can flow back and forth. Sometimes you lose part of a formation and plug any holes to render that part of the board 'dead' and lock in your territory / prevent further losses.
    6. Low-numbered tiles are obvious-but-low-valued liabilities, but high-numbered tiles can be bigger liabilities if they get flipped (which is harder). One lucky play with a 20-army tile can cause a swing of 200 (from +100 to -100 armies). So tile placement will have both offensive and defensive considerations.

    Points 1, 2 and 4 seem to resemble a minimax strategy, where we minimize the maximum expected possible loss (modified by some probabilistic consideration of the value ß the opponent can draw from 1..20, i.e. a structure which can only be flipped by a ß=20 tile is 'nearly impregnable'). I'm not clear what the implications of points 3, 5 and 6 are for optimal strategy. I'm interested in comments from Go, Chess or Othello players. (A rough sketch of this kind of one-ply scoring follows below.)

    (The sequel, ProximityHD for XBox Live, allows 4-player cooperative or competitive local multiplayer and increases the branching factor, since you now have 5 tiles in your hand at any given time, of which you can only play one. Reinforcement of ally tiles is increased to +2 per ally.)
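
    Purely to illustrate the minimax-flavoured placement scoring discussed above, here is a minimal one-ply greedy evaluator. It is a sketch, not an optimal algorithm: the board representation, the neighbors() adjacency function and the scoring weights are assumptions invented for this example, not anything taken from the actual game.

        # Sketch: greedy one-ply placement heuristic for a Proximity-like game.
        # board maps occupied hex coordinates to (owner, armies); owner is 'me' or 'foe'.

        def score_placement(board, cell, tile_value, neighbors):
            """Immediate gain of placing tile_value on the empty cell, minus a
            rough penalty for how exposed the new tile is to future flips."""
            gain = tile_value
            exposed = 0
            for n in neighbors(cell):
                if n not in board:
                    exposed += 1            # empty neighbor: a future attack vector
                    continue
                owner, armies = board[n]
                if owner == 'me':
                    gain += 1               # reinforce each adjacent ally by +1
                elif tile_value > armies:
                    gain += 2 * armies      # flip: opponent loses it, we gain it
            # chance that a random enemy tile (1..20, uniform) could outnumber ours
            p_stronger = max(0, 20 - tile_value) / 20.0
            return gain - p_stronger * exposed * tile_value

        def best_move(board, empty_cells, tile_value, neighbors):
            """Pick the vacant cell with the highest heuristic score."""
            return max(empty_cells,
                       key=lambda c: score_placement(board, c, tile_value, neighbors))

    A real AI would replace the exposure penalty with a proper expectation over the opponent's tile distribution, and search deeper than one ply within the 5s budget.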


  • How to implement an offline reader writer lock

    - by Peter Morris
    Some context for the question:

    - All objects in this question are persistent.
    - All requests will be from a Silverlight client talking to an app server via a binary protocol (Hessian), not WCF.
    - Each user will have a session key (not an ASP.NET session), which will be a string, integer, or GUID (undecided so far).
    - Some objects might take a long time to edit (30 or more minutes), so we have decided to use pessimistic offline locking: pessimistic because having to reconcile conflicts would be far too annoying for users, offline because the client is not permanently connected to the server.

    Rather than storing session/object locking information in the object itself, I have decided that any aggregate root that may have its instances locked should implement an interface ILockable:

        public interface ILockable
        {
            Guid LockID { get; }
        }

    This LockID will be the identity of a "Lock" object which holds the information of which session is locking it. Now, if this were simple pessimistic locking I'd be able to achieve this very simply (using an incrementing version number on Lock to identify update conflicts), but what I actually need is reader-writer pessimistic offline locking. The reason is that some parts of the application will perform actions that read these complex structures. These include things like:

    - Reading a single structure to clone it.
    - Reading multiple structures in order to create a binary file to "publish" the data to an external source.

    Read locks will be held for a very short period of time, typically less than a second, although in some circumstances they could be held for about 5 seconds at a guess. Write locks will mostly be held for a long time, as they are mostly held by humans. There is a high probability of two users trying to edit the same aggregate at the same time, and a high probability of many users needing to temporarily read-lock at the same time too.

    I'm looking for suggestions as to how I might implement this. One additional point: if I want to place a write lock and there are some read locks, I would like to "queue" the write lock so that no new read locks are placed. If the read locks are removed within X seconds then the write lock is obtained; if not, then the write lock backs off. No new read locks would be placed while a write lock is queued.

    So far I have this idea:

    - The Lock object will have a version number (int) so I can detect multi-update conflicts, reload, try again.
    - It will have a string[] for read locks.
    - A string to hold the session ID that has a write lock.
    - A string to hold the queued write lock.
    - Possibly a recursion counter to allow the same session to lock multiple times (for both read and write locks), but I'm not sure about this yet.

    Rules:

    - Can't place a read lock if there is a write lock or queued write lock.
    - Can't place a write lock if there is a write lock or queued write lock.
    - If there are no locks at all, then a write lock may be placed.
    - If there are read locks, then the write lock will be queued instead of a full write lock placed. (If after X time the read locks are not gone, the lock backs off; otherwise it is upgraded.)
    - Can't queue a write lock for a session that has a read lock.

    Can anyone see any problems? Suggest alternatives? Anything? I'd appreciate feedback before deciding on what approach to take. (A toy model of these rules is sketched below.)
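
    To make the queued-writer rules above concrete, here is a toy, in-memory model of the lock state machine — a sketch only. The real implementation would persist the Lock object and rely on its version number for optimistic concurrency; the class and method names here are invented for illustration.

        import time

        class OfflineRWLock:
            """In-memory model of the read / write / queued-write rules above."""

            def __init__(self, queue_timeout=5.0):
                self.version = 0        # bumped on every change (multi-update conflict detection)
                self.readers = set()    # session ids holding read locks
                self.writer = None      # session id holding the write lock
                self.queued = None      # (session id, deadline) of the queued writer
                self.queue_timeout = queue_timeout

            def acquire_read(self, session):
                # Rule: no read lock past a write lock or a queued write lock.
                if self.writer or self.queued:
                    return False
                self.readers.add(session)
                self.version += 1
                return True

            def acquire_write(self, session):
                # Rules: no second writer or queued writer; a reader can't queue a write.
                if self.writer or self.queued or session in self.readers:
                    return False
                if not self.readers:
                    self.writer = session                # no locks at all: granted
                else:
                    deadline = time.time() + self.queue_timeout
                    self.queued = (session, deadline)    # readers present: queue it
                self.version += 1
                return True

            def release_read(self, session):
                self.readers.discard(session)
                if self.queued and not self.readers:
                    self.writer, self.queued = self.queued[0], None  # upgrade
                self.version += 1

            def release_write(self, session):
                if self.writer == session:
                    self.writer = None
                    self.version += 1

            def poll_queue(self):
                """Back the queued writer off if the readers outlived the timeout."""
                if self.queued and self.readers and time.time() > self.queued[1]:
                    self.queued = None
                    self.version += 1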


  • How can arguments to variadic functions be passed by reference in PHP?

    - by outis
    Assuming it's possible, how would one pass arguments by reference to a variadic function without generating a warning in PHP? We can no longer use the '&' operator in a function call, otherwise I'd accept that (even though it would be error prone, should a coder forget it).

    What inspired this are old MySQLi wrapper classes that I unearthed (these days, I'd just use PDO). The only difference between the wrappers and the MySQLi classes is that the wrappers throw exceptions rather than returning FALSE.

        class DBException extends RuntimeException {}
        ...

        class MySQLi_throwing extends mysqli {
            ...
            function prepare($query) {
                $stmt = parent::prepare($query);
                if (!$stmt) {
                    throw new DBException($this->error, $this->errno);
                }
                return new MySQLi_stmt_throwing($this, $query, $stmt);
            }
        }

        // I don't remember why I switched from extension to composition, but
        // it shouldn't matter for this question.
        class MySQLi_stmt_throwing /* extends MySQLi_stmt */ {
            protected $_link, $_query, $_delegate;

            public function __construct($link, $query, $prepared) {
                //parent::__construct($link, $query);
                $this->_link = $link;
                $this->_query = $query;
                $this->_delegate = $prepared;
            }

            function bind_param($name, &$var) {
                return $this->_delegate->bind_param($name, $var);
            }

            function __call($name, $args) {
                //$rslt = call_user_func_array(array($this, 'parent::' . $name), $args);
                $rslt = call_user_func_array(array($this->_delegate, $name), $args);
                if (False === $rslt) {
                    throw new DBException($this->_link->error, $this->errno);
                }
                return $rslt;
            }
        }

    The difficulty lies in calling methods such as bind_result on the wrapper. Constant-arity functions (e.g. bind_param) can be explicitly defined, allowing for pass-by-reference. bind_result, however, needs all arguments to be pass-by-reference. If you call bind_result on an instance of MySQLi_stmt_throwing as-is, the arguments are passed by value and the binding won't take:

        try {
            $id = Null;
            $stmt = $db->prepare('SELECT id FROM tbl WHERE ...');
            $stmt->execute();
            $stmt->bind_result($id);
            // $id is still null at this point
            ...
        } catch (DBException $exc) {
            ...
        }

    Since the above classes are no longer in use, this question is merely a matter of curiosity. Alternate approaches to the wrapper classes are not relevant. Defining a method with a bunch of arguments taking Null default values is not correct (what if you define 20 arguments, but the function is called with 21?). Answers don't even need to be written in terms of MySQLi_stmt_throwing; it exists simply to provide a concrete example.


  • Which StackOverflow-style MarkDown (WMD) javascript editor should I use?

    - by Edan Maor
    Background: I'm working on an application which requires user-entered content, and I've decided to use a StackOverflow-style MarkDown editor. After researching this topic for the last few days, I realize there are numerous forks of the base WMD editor, some with a few basic enhancements and some with serious differences from the StackOverflow one. Since this will be the heart of the application, I'd like to start with the best code base I can. I'd be happy if anyone could recommend which one of the many solutions out there best fits my needs. Below are my requirements, plus what I've managed to find already. I'm hoping this question will help me decide which version to go with, and maybe help me discover a port out there that's an even better fit for my needs.

    The requirements for my project:

    - Live preview.
    - Multiple editors on the same page (it is not known how many in advance, since the user can dynamically add another editing box).
    - Ability to extend with extra buttons (I'd like a button to upload a picture, instead of just adding an img URL).
    - Ability to dynamically show/hide the edit box (and only see the preview box).
    - Not an absolute must, but I'd prefer to stick as close to StackOverflow's look and feel, since it's well known.
    - Don't know if this matters, but the backend is written in Django.

    Editors I've looked at — here are a few of the code bases, with thoughts; obviously, I might be missing another solution out there:

    - The derobins version. From what I can tell, this is the official StackOverflow version. Seems like it doesn't support multiple editors on one page.
    - JQuery.MarkEdit. Looks very good, but is pretty different from the StackOverflow version.
    - MooWMD. Looks like the winner right now, but I'm a little concerned since it looks less active/hackable than MarkEdit.
    - The wmd-new version. Not sure, looks like an old codebase without much use.
    - The SocialSite branch. Seems like it's not for public use.


  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host, so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc., but our network admins have some fairly strict security settings in place that squashed this idea:

    - They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do).
    - With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X was assigned two IP addresses, so this didn't work.

    There may have been a way around these issues in the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running, and both pulled IP addresses from DHCP.

    But then the problem came in: the network admins could see (on the switch) the ARP entry for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there?

    Step 1: Enable ARP filtering on all interfaces:

        # sysctl -w net.ipv4.conf.all.arp_filter=1
        # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf

    From the file networking/ip-sysctl.txt in the Linux kernel docs:

        arp_filter - BOOLEAN
            1 - Allows you to have multiple network interfaces on the same
            subnet, and have the ARPs for each interface be answered
            based on whether or not the kernel would route a packet from
            the ARP'd IP out that interface (therefore you must use source
            based routing for this to work). In other words it allows control
            of which cards (usually 1) will respond to an arp request.

            0 - (default) The kernel can respond to arp requests with addresses
            from other interfaces. This may seem wrong but it usually makes
            sense, because it increases the chance of successful communication.
            IP addresses are owned by the complete host on Linux, not by
            particular interfaces. Only for more complex setups like load-
            balancing, does this behaviour cause problems.

            arp_filter for the interface will be enabled if at least one of
            conf/{all,interface}/arp_filter is set to TRUE,
            it will be disabled otherwise

    Step 2: Implement source-based routing

    I basically just followed the directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101.

    Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables:

        ... top of file omitted ...
        1 eth0
        2 eth1

    Define the routes for these two tables:

        # ip route add default via 10.0.0.1 table eth0
        # ip route add default via 10.0.0.1 table eth1
        # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0
        # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1

    Define the rules for when to use the new routing tables:

        # ip rule add from 10.0.0.100 table eth0
        # ip rule add from 10.0.0.101 table eth1

    The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this:

        # ip route add default via 10.0.0.1 dev eth0
        # ip route add 130.127.48.0/23 dev eth0 src 10.0.0.100
        # ip route add 130.127.48.0/23 dev eth1 src 10.0.0.101

    And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected.

    So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.


  • Suggest the best options to me to design the dynamic web interface using PHP MYSQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: it currently has 5 branches and is planning to extend all over the country. It is an insurance surveying company, dealing with 6 categories in the insurance domain, viz.:

    - Engineering
    - Fire
    - Marine
    - Motor
    - Miscellaneous
    - Risk Inspection

    The branches are named b1, b2, b3, b4, b5, and more are being added. Finally, they have contracts with 22 companies. For each claim they assign a unique ID of the form contract-company/category/serial-no. For example, taking contracted companies named xxx, sss and zzz:

        xxx/Engineering/001
        sss/Engineering/001
        ...
        xxx/Engineering/002
        sss/Engineering/002
        ...
        xxx/Fire/001
        sss/Fire/001
        ...
        xxx/Fire/002
        ...

    and so on. This is the way they issue the unique ID for each claim.

    What I want is to develop the interface with PHP, MySQL and AJAX so that it:

    - auto-generates the unique ID for each claim (see the sketch below);
    - stores the full details of each claim, keyed by its unique ID;
    - shows all claims on one page, viewable by branch and by category;
    - sends a monthly report (all claims they have been given and the status of each claim) to the contracted companies;
    - gives contracted companies access, but only to their respective claims;
    - lets each claim's documents be uploaded by the company's own users or the administrator; these files are associated with the unique ID, and contracted companies can view them;
    - gives branches access to enter new claims and update old claims;
    - lets the administrator create, update and delete all claims and their details;
    - allows only the administrator to grant new users (own-company branches / contracted companies).

    Finally, the panel is completely database-driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and regards, Krishna. P ([email protected])
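
    One way to picture the auto-generated claim IDs described above — a minimal sketch in which an in-memory dict stands in for a MySQL table keyed on the (company, category) pair. The function name, the dict and the zero-padding width are assumptions for illustration, not the company's actual scheme.

        # Sketch: next claim ID of the form company/category/serial.
        counters = {}  # (company, category) -> last serial issued; a DB table in practice

        def next_claim_id(company, category):
            key = (company, category)
            counters[key] = counters.get(key, 0) + 1
            return "%s/%s/%03d" % (company, category, counters[key])

        print(next_claim_id("xxx", "Engineering"))  # -> xxx/Engineering/001
        print(next_claim_id("sss", "Engineering"))  # -> sss/Engineering/001
        print(next_claim_id("xxx", "Engineering"))  # -> xxx/Engineering/002

    In a real PHP/MySQL implementation the serial would come from a row lock or a per-(company, category) counter table, so that two branches cannot issue the same number concurrently.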


  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk.

    Right now I'm trying to decide which transport to use. The options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP: the HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance.

    I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost.

    On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s, with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s, depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing.

    I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering.

    Does anyone have any ideas for how I can get HTTP.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?


  • problem with reload data from table view after come back from another view

    - by user129677
    I have a problem in my application; any help will be greatly appreciated. Basically the app goes from view A to view B, and then comes back from view B. View A has dynamic data loaded from the database and displayed in a table view. This page also has an edit button, not on the navigation bar. When the user taps the edit button, the app goes to view B, which shows a picker view, and the user can make changes there. Once that is done, the user taps the back button on the navigation bar, which saves the changes into NSUserDefaults and goes back to view A by popping view B.

    When coming back to view A, it should get the new data from NSUserDefaults - and it does; I used NSLog to print the data to the console and it shows the correct values. It should also invoke the viewWillAppear: method to get the new data for the table view, but it doesn't. It doesn't even call the tableView:numberOfRowsInSection: method - I placed an NSLog statement inside that method, but nothing is printed to the console. As a result, view A still shows the old data. The only way to get the new data into view A is to stop and restart the application.

    Both view A and view B are subclasses of UIViewController, conforming to UITableViewDelegate and UITableViewDataSource. Here is my code in view A:

        - (void)viewWillAppear:(BOOL)animated {
            NSLog(@"enter in Schedule2ViewController ...");
            // load in data from database, and store into NSArray object
            //[self.theTableView reloadData];
            [self.theTableView setNeedsDisplay];
            //[self.theTableView setNeedsLayout];
        }

    Here, "theTableView" is a UITableView variable. I tried all three of "reloadData", "setNeedsDisplay", and "setNeedsLayout", but none of them seemed to work.

    In view B, here are the methods corresponding to the back button on the navigation bar:

        - (void)viewDidLoad {
            UIBarButtonItem *saveButton = [[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemSave
                                     target:self
                                     action:@selector(savePreference)];
            self.navigationItem.leftBarButtonItem = saveButton;
            [saveButton release];
        }

        - (IBAction)savePreference {
            NSLog(@"save preference.");
            // save data into the NSUserDefaults
            [self.navigationController popViewControllerAnimated:YES];
        }

    Am I doing this the right way? Or is there anything that I missed? Many thanks.


  • Android: database reading problem throws exception

    - by Vamsi
    Hi, I am having this problem with the Android database. I adapted the DBAdapter file from the NotepadAdv3 example on the Google Android page.

    DBAdapter.java:

        public class DBAdapter {
            private static final String TAG = "DBAdapter";
            private static final String DATABASE_NAME = "PasswordDb";
            private static final String DATABASE_TABLE = "myuserdata";
            private static final String DATABASE_USERKEY = "myuserkey";
            private static final int DATABASE_VERSION = 2;

            public static final String KEY_USERKEY = "userkey";
            public static final String KEY_TITLE = "title";
            public static final String KEY_DATA = "data";
            public static final String KEY_ROWID = "_id";

            private final Context mContext;
            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;

            private static final String DB_CREATE_KEY =
                "create table " + DATABASE_USERKEY + " ("
                + "userkey text not null"
                + ");";

            private static final String DB_CREATE_DATA =
                "create table " + DATABASE_TABLE + " ("
                + "_id integer primary key autoincrement, "
                + "title text not null"
                + "data text"
                + ");";

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(DB_CREATE_KEY);
                    db.execSQL(DB_CREATE_DATA);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to "
                            + newVersion + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS myuserkey");
                    db.execSQL("DROP TABLE IF EXISTS myuserdata");
                    onCreate(db);
                }
            }

            public DBAdapter(Context ctx) {
                this.mContext = ctx;
            }

            public DBAdapter Open() throws SQLException {
                try {
                    mDbHelper = new DatabaseHelper(mContext);
                } catch (Exception e) {
                    Log.e(TAG, e.toString());
                }
                mDb = mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                mDbHelper.close();
            }

            public Long storeKey(String userKey) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_USERKEY, userKey);
                try {
                    mDb.delete(DATABASE_USERKEY, "1=1", null);
                } catch (Exception e) {
                    Log.e(TAG, e.toString());
                }
                return mDb.insert(DATABASE_USERKEY, null, initialValues);
            }

            public String retrieveKey() {
                final Cursor c;
                try {
                    c = mDb.query(DATABASE_USERKEY, new String[] { KEY_USERKEY },
                            null, null, null, null, null);
                } catch (Exception e) {
                    Log.e(TAG, e.toString());
                    return "";
                }
                if (c.moveToFirst()) {
                    return c.getString(0);
                } else {
                    Log.d(TAG, "UserKey Empty");
                }
                return "";
            }

            // not including any function related to the "myuserdata" table
        }

    Class1.java:

        {
            mUserKey = mDbHelper.retrieveKey();
            mDbHelper.storeKey(Key);
        }

    The error that I am receiving, from Log.e(TAG, e.toString()) in the retrieveKey() and storeKey() methods, is:

        no such table: myuserkey: , while compiling: SELECT userkey FROM myuserkey


  • Convert C++Builder AnsiString to std::string via boost::lexical_cast

    - by David Klein
    For a school assignment I have to implement a project in C++ using Borland C++ Builder. As the VCL uses AnsiString for all GUI components, I have to convert all of my std::strings to AnsiString for the sake of displaying:

        std::string inp = "Hello world!";
        AnsiString outp(inp.c_str());

    This works, of course, but is a bit tedious to write, and it is code duplication I want to avoid. As we use Boost in other contexts, I decided to provide some helper functions to get boost::lexical_cast to work with AnsiString. Here is my implementation so far:

        std::istream& operator>>(std::istream& istr, AnsiString& str)
        {
            istr.exceptions(std::ios::badbit | std::ios::failbit | std::ios::eofbit);
            std::string s;
            std::getline(istr, s);
            str = AnsiString(s.c_str());
            return istr;
        }

    In the beginning I got access violation after access violation, but since I added the .exceptions() call the picture gets clearer. When the conversion is performed, I get the following exception:

        ios_base::eofbit set [Runtime Error/std::ios_base::failure]

    Does anyone have an idea how to fix it, and can anyone explain why the error occurs? My C++ experience is very limited. The conversion routine the other way round would be:

        std::ostream& operator<<(std::ostream& ostr, const AnsiString& str)
        {
            ostr << (str.c_str());
            return ostr;
        }

    Maybe someone will spot an error here too :) With best regards!

    Edit: At the moment I'm using the edited version of Jem's answer. It works in the beginning, but after a while of using the program Borland CodeGuard reports pointer arithmetic in already-freed regions. Any ideas how this could be related? The CodeGuard log (I'm using the German version; translations marked with **( )**):

        ------------------------------------------
        Fehler 00080. 0x104230 (r) (Thread 0x07A4):
        Zeigerarithmetik in freigegebenem Speicher: 0x0241A238-0x0241A258. **(pointer arithmetic in freed region)**
        | d:\program files\borland\bds\4.0\include\dinkumware\sstream Zeile 126:
        |     { // not first growth, adjust pointers
        |         _Seekhigh = _Seekhigh - _Mysb::eback() + _Ptr;
        |>        _Mysb::setp(_Mysb::pbase() - _Mysb::eback() + _Ptr,
        |             _Mysb::pptr() - _Mysb::eback() + _Ptr, _Ptr + _Newsize);
        |         if (_Mystate & _Noread)
        Aufrufhierarchie: **(stack trace)**
        0x00411731(=FOSChampion.exe:0x01:010731) d:\program files\borland\bds\4.0\include\dinkumware\sstream#126
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679
        0x00405759(=FOSChampion.exe:0x01:004759) D:\Projekte\Schule\foschamp\src\Server\Ansistringkonverter.h#31
        0x004080C9(=FOSChampion.exe:0x01:0070C9) D:\Projekte\Schule\foschamp\lib\boost_1_34_1\boost/lexical_cast.hpp#151

        Objekt (0x0241A238) [Größe: 32 Byte] war erstellt mit new **(Object was created with new)**
        | d:\program files\borland\bds\4.0\include\dinkumware\xmemory Zeile 28:
        |     _Ty _FARQ *_Allocate(_SIZT _Count, _Ty _FARQ *)
        |     { // allocate storage for _Count elements of type _Ty
        |>        return ((_Ty _FARQ *)::operator new(_Count * sizeof (_Ty)));
        |     }
        |
        Aufrufhierarchie: **(stack trace)**
        0x0040ED90(=FOSChampion.exe:0x01:00DD90) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#28
        0x0040E194(=FOSChampion.exe:0x01:00D194) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#143
        0x004115CF(=FOSChampion.exe:0x01:0105CF) d:\program files\borland\bds\4.0\include\dinkumware\sstream#105
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679

        Objekt (0x0241A238) war Gelöscht mit delete **(Object was deleted with delete)**
        | d:\program files\borland\bds\4.0\include\dinkumware\xmemory Zeile 138:
        |     void deallocate(pointer _Ptr, size_type)
        |     { // deallocate object at _Ptr, ignore size
        |>        ::operator delete(_Ptr);
        |     }
        |
        Aufrufhierarchie: **(stack trace)**
        0x004044C6(=FOSChampion.exe:0x01:0034C6) d:\program files\borland\bds\4.0\include\dinkumware\xmemory#138
        0x00411628(=FOSChampion.exe:0x01:010628) d:\program files\borland\bds\4.0\include\dinkumware\sstream#111
        0x00411183(=FOSChampion.exe:0x01:010183) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#465
        0x0040933D(=FOSChampion.exe:0x01:00833D) d:\program files\borland\bds\4.0\include\dinkumware\streambuf#151
        0x00405988(=FOSChampion.exe:0x01:004988) d:\program files\borland\bds\4.0\include\dinkumware\ostream#679
        0x00405759(=FOSChampion.exe:0x01:004759) D:\Projekte\Schule\foschamp\src\Server\Ansistringkonverter.h#31
        ------------------------------------------

    Ansistringkonverter.h is the file with the posted operators, and line 31 is:

        std::ostream& operator<<(std::ostream& ostr, const AnsiString& str)
        {
            ostr << (str.c_str()); // line 31
            return ostr;
        }

    Thanks for your help :)


  • How can I connect to a mail server using SMTP over SSL using Python?

    - by jakecar
    Hello. I have been having a hard time sending email from my school's email address. It is SSL, and the only code I could find online that works with SSL is this module by Matt Butcher:

        import smtplib, socket

        version = "1.00"
        __all__ = ['SMTPSSLException', 'SMTP_SSL']
        SSMTP_PORT = 465

        class SMTPSSLException(smtplib.SMTPException):
            """Base class for exceptions resulting from SSL negotiation."""

        class SMTP_SSL(smtplib.SMTP):
            """This class provides SSL access to an SMTP server.

            SMTP over SSL typically listens on port 465. Unlike StartTLS, SMTP over SSL
            makes an SSL connection before doing a helo/ehlo. All transactions, then,
            are done over an encrypted channel.

            This class is a simple subclass of the smtplib.SMTP class that comes with
            Python. It overrides the connect() method to use an SSL socket, and it
            overrides the starttls() function to throw an error (you can't do starttls
            within an SSL session).
            """
            certfile = None
            keyfile = None

            def __init__(self, host='', port=0, local_hostname=None, keyfile=None, certfile=None):
                """Initialize a new SSL SMTP object.

                If specified, `host' is the name of the remote host to which this object
                will connect. If specified, `port' specifies the port (on `host') to
                which this object will connect. `local_hostname' is the name of the
                localhost. By default, the value of socket.getfqdn() is used.

                An SMTPConnectError is raised if the SMTP host does not respond
                correctly. An SMTPSSLError is raised if SSL negotiation fails.

                Warning: This object uses socket.ssl(), which does not do client-side
                verification of the server's cert.
                """
                self.certfile = certfile
                self.keyfile = keyfile
                smtplib.SMTP.__init__(self, host, port, local_hostname)

            def connect(self, host='localhost', port=0):
                """Connect to an SMTP server using SSL.

                `host' is localhost by default. Port will be set to 465 (the default
                SSL SMTP port) if no port is specified. If the host name ends with a
                colon (`:') followed by a number, that suffix will be stripped off and
                the number interpreted as the port number to use. This will override
                the `port' parameter.

                Note: This method is automatically invoked by __init__, if a host is
                specified during instantiation.
                """
                # MB: Most of this (except for the socket connection code) is from
                # the SMTP.connect() method. I changed only the bare minimum for the
                # sake of compatibility.
                if not port and (host.find(':') == host.rfind(':')):
                    i = host.rfind(':')
                    if i >= 0:
                        host, port = host[:i], host[i+1:]
                        try:
                            port = int(port)
                        except ValueError:
                            raise socket.error, "nonnumeric port"
                if not port:
                    port = SSMTP_PORT
                if self.debuglevel > 0:
                    print>>stderr, 'connect:', (host, port)
                msg = "getaddrinfo returns an empty list"
                self.sock = None
                for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
                    af, socktype, proto, canonname, sa = res
                    try:
                        self.sock = socket.socket(af, socktype, proto)
                        if self.debuglevel > 0:
                            print>>stderr, 'connect:', (host, port)
                        self.sock.connect(sa)
                        # MB: Make the SSL connection.
                        sslobj = socket.ssl(self.sock, self.keyfile, self.certfile)
                    except socket.error, msg:
                        if self.debuglevel > 0:
                            print>>stderr, 'connect fail:', (host, port)
                        if self.sock:
                            self.sock.close()
                        self.sock = None
                        continue
                    break
                if not self.sock:
                    raise socket.error, msg
                # MB: Now set up fake socket and fake file classes.
                # Thanks to the design of smtplib, this is all we need to do
                # to get SSL working with all other methods.
                self.sock = smtplib.SSLFakeSocket(self.sock, sslobj)
                self.file = smtplib.SSLFakeFile(sslobj)
                (code, msg) = self.getreply()
                if self.debuglevel > 0:
                    print>>stderr, "connect:", msg
                return (code, msg)

            def setkeyfile(self, keyfile):
                """Set the absolute path to a file containing a private key.

                This method will only be effective if it is called before connect().
                This key will be used to make the SSL connection."""
                self.keyfile = keyfile

            def setcertfile(self, certfile):
                """Set the absolute path to a file containing a x.509 certificate.

                This method will only be effective if it is called before connect().
                This certificate will be used to make the SSL connection."""
                self.certfile = certfile

            def starttls():
                """Raises an exception. You cannot do StartTLS inside of an ssl session.

                Calling starttls() will return an SMTPSSLException"""
                raise SMTPSSLException, "Cannot perform StartTLS within SSL session."

    And then my code:

        import ssmtplib

        conn = ssmtplib.SMTP_SSL('HOST')
        conn.login('USERNAME', 'PW')
        conn.ehlo()
        conn.sendmail('FROM_EMAIL', 'TO_EMAIL', "MESSAGE")
        conn.close()

    I got this error:

        /Users/Jake/Desktop/Beth's Program/ssmtplib.py:116: DeprecationWarning: socket.ssl() is deprecated. Use ssl.wrap_socket() instead.
          sslobj = socket.ssl(self.sock, self.keyfile, self.certfile)
        Traceback (most recent call last):
          File "emailer.py", line 5, in <module>
            conn = ssmtplib.SMTP_SSL('HOST')
          File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 79, in __init__
            smtplib.SMTP.__init__(self, host, port, local_hostname)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/smtplib.py", line 239, in __init__
            (code, msg) = self.connect(host, port)
          File "/Users/Jake/Desktop/Beth's Program/ssmtplib.py", line 131, in connect
            self.sock = smtplib.SSLFakeSocket(self.sock, sslobj)
        AttributeError: 'module' object has no attribute 'SSLFakeSocket'

    Thank you!
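
    Worth noting for anyone landing here: as the traceback shows, Python 2.6's smtplib no longer exposes the SSLFakeSocket helper that this wrapper relies on. On Python 2.6 and later the standard library already ships an SSL-capable class, smtplib.SMTP_SSL, so the third-party module is no longer needed at all. A minimal sketch — the host, port, credentials and message below are placeholders, not values from the question:

        import smtplib

        # Python 2.6+/3.x: SMTP over SSL straight from the standard library.
        conn = smtplib.SMTP_SSL('mail.example.edu', 465)  # placeholder host/port
        conn.login('USERNAME', 'PASSWORD')
        conn.sendmail('FROM_EMAIL', ['TO_EMAIL'], 'Subject: test\n\nMESSAGE')
        conn.quit()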


  • Mac OS - Built SVN from source, now Apache2 not loading sites

    - by Geuis
    This relates to another question I asked earlier today. I built SVN 1.6.2 from source, and in the process it has completely screwed up my dev environment. After I built SVN, Apache wasn't loading. It was giving me this error:

        Syntax error on line 117 of /private/etc/apache2/httpd.conf: Cannot load /usr/libexec/apache2/mod_dav_svn.so into server: dlopen(/usr/libexec/apache2/mod_dav_svn.so, 10): no suitable image found. Did find:\n\t/usr/libexec/apache2/mod_dav_svn.so: mach-o, but wrong architecture

    It appears that SVN overwrote the old mod_dav_svn.so. I am not able to get it to build as FAT, and I can't recover whatever was originally there. I resolved this (temporarily?) by commenting out the line that was loading mod_dav_svn.so, and got Apache to start at that point. However, even though Apache is running, I am now getting this error when trying to access my dev sites:

        Directory index forbidden by Options directive: /usr/share/tomcat6/webapps/ROOT/

    I have Apache2 sitting in front of Tomcat6. I access my local dev site using the internal name "http://localthesite". I have had virtual directories set up that have worked until this SVN debacle. Tomcat is installed at /usr/local/apache-tomcat, and webapps is /usr/local/apache-tomcat/webapps. Our production servers deploy Tomcat to /usr/share/tomcat6, so I have symlinks set up on my system to replicate this as well; these point back to the actual installation path. This has all been working fine as well. None of our configurations for Apache2, Tomcat, or .htaccess have changed.

    Over the weekend, I performed a "Repair Disk Permissions" on the system. This was before I discovered the mod_dav_svn.so problem. I have been reading up on this all morning, and the most common answer is that there is an Options -Indexes set. We have this in a config file, but it was there before, and when I removed it during testing I still got the same errors from Apache.

    At this point, I'm assuming I either totally borked the native Apache2 installation on this Mac, or there is a permissions error somewhere that I'm missing. The permissions error could be from the SVN installation, or from my repair process. Does anyone have any idea what could be the problem? I'm totally blocked right now and have no idea where to check next.


  • Code Golf: Finite-state machine!

    - by Adam Matan
    Finite state machine

    A deterministic finite state machine is a simple computation model, widely used as an introduction to automata theory in basic CS courses. It is a simple model, equivalent to regular expressions, which determines whether a certain input string is Accepted or Rejected. Leaving some formalities aside, a run of a finite state machine is composed of:

    - alphabet: a set of characters.
    - states: usually visualized as circles. One of the states must be the start state. Some of the states might be accepting, usually visualized as double circles.
    - transitions: usually visualized as directed arcs between states, these are directed links between states, each associated with an alphabet letter.
    - input string: a list of alphabet characters.

    A run on the machine begins at the starting state. Each letter of the input string is read; if there is a transition between the current state and another state which corresponds to the letter, the current state is changed to the new state. After the last letter has been read, if the current state is an accepting state, the input string is accepted. If the last state was not an accepting state, or a letter had no corresponding arc from a state during the run, the input string is rejected.

    Note: this short description is far from being a full, formal definition of an FSM; Wikipedia's fine article is a great introduction to the subject.

    Example

    The following machine tells if a binary number, read from left to right, has an even number of 0s:

    - The alphabet is the set {0,1}.
    - The states are S1 and s2.
    - The transitions are (S1, 0) -> s2, (S1, 1) -> S1, (s2, 0) -> S1 and (s2, 1) -> s2.
    - The input string is any binary number, including an empty string.

    The rules

    Implement an FSM in a language of your choice. The FSM should accept the following input:

    - <states>: a list of states, separated by spaces. The first state in the list is the start state. Accepting states begin with a capital letter.
    - <transitions>: one or more lines. Each line is a three-tuple: origin state, letter, destination state.
    - <input word>: zero or more characters, followed by a newline.

    For example, the aforementioned machine with 1001010 as an input string would be written as:

        S1 s2
        S1 0 s2
        S1 1 S1
        s2 0 S1
        s2 1 s2
        1001010

    Output

    The FSM's run, written as <state> <letter> -> <state> lines, followed by the final verdict. The output for the example input would be:

        S1 1 -> S1
        S1 0 -> s2
        s2 0 -> S1
        S1 1 -> S1
        S1 0 -> s2
        s2 1 -> s2
        s2 0 -> S1
        ACCEPT

    For the empty input '':

        S1 ACCEPT

    For 101:

        S1 1 -> S1
        S1 0 -> s2
        s2 1 -> s2
        REJECT

    For '10X':

        S1 1 -> S1
        S1 0 -> s2
        s2 X
        REJECT

    Prize

    A nice bounty will be given to the most elegant and short solution.

    Reference implementation

    A reference Python implementation will be published soon. (In the meantime, a rough sketch follows below.)
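
    While waiting for the promised reference implementation, here is a rough, ungolfed Python sketch of the specification above — my own reading of the spec, not the official reference:

        # Sketch: run the FSM spec described above on its input word.
        def run_fsm(spec):
            lines = spec.splitlines()
            start = lines[0].split()[0]      # first state listed is the start state
            word = lines[-1]                 # last line is the input word (assumed present, may be empty)
            trans = {}
            for line in lines[1:-1]:         # middle lines: origin, letter, destination
                origin, letter, dest = line.split()
                trans[(origin, letter)] = dest
            out, state = [], start
            for letter in word:
                if (state, letter) not in trans:
                    out.append("%s %s" % (state, letter))   # no matching arc
                    out.append("REJECT")
                    return "\n".join(out)
                nxt = trans[(state, letter)]
                out.append("%s %s -> %s" % (state, letter, nxt))
                state = nxt
            # accepting states begin with a capital letter
            out.append("ACCEPT" if state[0].isupper() else "REJECT")
            return "\n".join(out)

        example = "S1 s2\nS1 0 s2\nS1 1 S1\ns2 0 S1\ns2 1 s2\n1001010"
        print(run_fsm(example))   # ends with ACCEPT, matching the sample output above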


  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a SQL Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. In regards to SQL Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of SQL Compact rather than SQL Compact 3.5. Having done this, the application still runs just as it should - it will still interact with the Compact database, perform synchronization operations, etc.

    However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a SQL Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only SQL Compact data source / data provider listed are for SQL Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of SQL Compact by default. Is there a way you can add the old version of SQL Compact to the list of "Data Sources" for the connection wizard?

    To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of SQL Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to interact with a SQL Compact database from SQL Server 2005.

    To provide one more bit of detail, I discovered this problem when I went to my DataSet, right-clicked and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the SQL Compact connection that we've always used, I now get the following error when clicking the Next button:

        Failed to open a connection to the database. "The selected database was created with an earlier version of SQL Server Compact and needs to be upgraded to SQL Server Compact 3.5 before the connection can be opened or tested. Upgrade the database by creating a new data connection and completing the Add Connection dialog box." Check the connection and try again.

    The only problem here is that we still use SQL Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with SQL Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.

    Read the article

  • Linux policy routing - packets not coming back

    - by Bugsik
    I am trying to set up policy routing on my home server. My network looks like this:

        Host                 routed VPN gateway        Internet link through VPN
        192.168.0.35/24 ---> 192.168.0.5/24      --->  192.168.0.1   DSL router
                             10.200.2.235/22 .... .... 10.200.0.1    VPN server

    The traffic from 192.168.0.32/27 should be, and is, routed through the VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 through the VPN as well - for a start, traffic from the user with uid 2000. Policy routing is done using the iptables MARK target and ip rule fwmark. The problem: when connecting as user 2000 from 192.168.0.5, tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (there I am not using fwmark but a src policy). Here is my VPN gateway setup:

        # uname -a
        Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux
        # iptables -V
        iptables v1.4.12
        # ip -V
        ip utility, iproute2-ss111117

    IPtables rules (all policies in table filter are ACCEPT):

        # iptables -t mangle -nvL
        Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 767K packets, 312M bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes)
         pkts bytes target prot opt in out source destination
           74  4707 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 2000 MARK set 0x3
        Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes)
         pkts bytes target prot opt in out source destination

        # iptables -t nat -nvL
        Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes)
         pkts bytes target prot opt in out source destination
        Chain INPUT (policy ACCEPT 7 packets, 432 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes)
         pkts bytes target prot opt in out source destination
        Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes)
         pkts bytes target prot opt in out source destination
          119  7588 MASQUERADE all -- * vpn 0.0.0.0/0 0.0.0.0/0

    Routing:

        # ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
               valid_lft forever preferred_lft forever
        3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
            link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff
            inet 192.168.0.5/24 brd 192.168.0.255 scope global lan
            inet6 fe80::240:63ff:fef9:c38f/64 scope link
               valid_lft forever preferred_lft forever
        4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
            link/none
            inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn

        # ip rule show
        0:      from all lookup local
        32764:  from all fwmark 0x3 lookup VPN
        32765:  from 192.168.0.32/27 lookup VPN
        32766:  from all lookup main
        32767:  from all lookup default

        # ip route show table VPN
        default via 10.200.0.1 dev vpn
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

        # ip route show
        default via 192.168.0.1 dev lan metric 100
        10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235
        192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5

    TCP dump showing no traffic coming back when the connection is made from 192.168.0.5 as user 2000:

        # tcpdump -i vpn
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes
        ### Traffic from user 2000 on 192.168.0.5 ###
        10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0
        10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0
        ### Traffic from 192.168.0.35 ###
        10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0
        10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0
        10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0
        10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479
        10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0

    Read the article

  • Perl regex matching output from `w -hs` command

    - by Bushman
    I'm trying to write a Perl script that will work better with KDE's kwrited, which, as far as I can tell, is connected to a pts and puts every line it receives through the KDE system tray notifications, with the title "KDE write daemon". Unfortunately, it makes a separate notification for each and every line, so it spams up the system tray with multiline messages on regular old write, and for some reason it cuts off the entire last line of the message when using wall (one-line messages are also goners). I was also hoping to make it so that it could broadcast across a LAN with thick clients. Before starting on that (which would require ssh, of course), I tried to make an ssh-less version to make sure it works. Unfortunately, it doesn't. Running

        perl ./write.pl "Testing 1 2 3"

    where the following is the contents of ./write.pl:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $message = "";
        my $device = "";
        my $possibledevice = '`w -hs | grep "/usr/bin/kwrited"`'; #Where is kwrited?
        $possibledevice =~ s/^[^\t][\t]//;
        $possibledevice =~ s/[\t][^\t][\t ]\/usr\/bin\/kwrited$//;
        $possibledevice = '/dev/'.$possibledevice;
        unless ($possibledevice eq "") {
            $device = $possibledevice;
        }
        if ($ARGV[0] ne "") {
            $message = $ARGV[0];
            $device = $ARGV[1];
        } else {
            $device = $ARGV[0] unless $ARGV[0] eq "";
            while (<STDIN>) {
                chomp;
                $message .= <STDIN>;
            }
        }
        if ($message ne "") {
            system "echo \'$message\' > $device";
        } else {
            print "Error: empty message"
        }

    produces the following error:

        $ perl write.pl "Testing 1 2 3"
        Use of uninitialized value $device in concatenation (.) or string at write.pl line 29.
        sh: -c: line 0: syntax error near unexpected token `newline'
        sh: -c: line 0: `echo 'foo' > '

    Somehow, the regular expressions and/or the backtick escape in processing $possibledevice are not working properly, because where kwrited is connected to /dev/pts/0, the following works perfectly:

        $ perl write.pl "Testing 1 2 3" /dev/pts/0
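    As an aside on what the script seems to be aiming for, here is a rough Python sketch of the intended logic - hypothetical, and it assumes the tty is the second whitespace-separated column of `w -hs` output and the command is the last:

        import subprocess

        def kwrited_device():
            # Look for the kwrited process in `w -hs` output and return its pts.
            out = subprocess.check_output(["w", "-hs"], text=True)
            for line in out.splitlines():
                fields = line.split()
                if fields and fields[-1] == "/usr/bin/kwrited":
                    return "/dev/" + fields[1]   # assumed: tty is the second column
            return None

        def send(message, device):
            # Write directly to the terminal device instead of shelling out,
            # which avoids the quoting problems of `echo '$message' > $device`.
            with open(device, "w") as tty:
                tty.write(message + "\n")

    One thing that stands out in the Perl above: $possibledevice is assigned a single-quoted string, so the backticks inside it are never executed as a command - the variable just holds that literal text, the tab-based substitutions never match, and $device ends up coming from the missing $ARGV[1], which is consistent with the "uninitialized value $device" error shown.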

    Read the article

  • WPF DataTemplates with VS2010 designer support + reusable - would you do it that way?

    - by Christian
    Ok, I am currently tidying up all my old stuff. I ran into the issue of "code only DataTemplates" - which are really a pain in the ass. You can't see anything, they are really hard to design, and I want to improve my project. So I had the idea to use the following solution. The main benefits are:
    - You have designer support for your data template
    - You can easily include example sample data
    - The file naming is consistent and easy to remember
    - The preview does not require an additional XAML wrapper (even with code-only controls)
    I will try to explain and illustrate my solution using a few pictures. I am interested in feedback, especially if you can imagine a better way to do it. And, of course, if you see any maintenance or performance issues. Ok, let's start with a simple PreviewObject. I want to have some data in it, so I create a subclass which will automatically fill in some dummy data. Then I add a list to the control, and name this list. Afterwards I add a DataTemplate; this is the sole reason for the whole control (to be able to see and edit the DataTemplate in place). Now I use this control to get my DataTemplate, to use it in other places. To make this easier, I added some code in the code-behind, see here: Now I want a control to show me a list of PreviewItems, so I created a "code-only" control which creates an instance of my service (or gets one using DI in the real world) and fills its list box with it: To view the result of this work, I added this control inside the same-named XAML; this is basically only to be able to see the final result. What I do not like in this solution: the need to create the last control in "code only". So I tried something different while writing this post. The following two screenshots illustrate the approach. I am creating an instance of the service inside the DataContext, and I am using bindings to supply the ItemsSource and the ItemTemplate. The reason for the strange "static property" is refactoring support. If I hardcode the path in the designer (e.g. using "Path = PreviewHistory") and I refactor the names (which happens quite often, early design phase) - I screw up my controls without realizing it. Does anyone have a better idea for this? I am using Resharper, btw. Thanks for any input, and sorry for the image overkill. It's just easier to explain that way. Chris

    Read the article

  • Any software for pattern-matching and -rewriting source code?

    - by Steven A. Lowe
    I have some old software (in a language that's not dead but is dead to me ;-)) that implements a basic pattern-matching and -rewriting system for source code. I am considering resurrecting this code, translating it into a modern language, and open-sourcing the project as a refactoring power-tool. Before I go much further, I want to know if anything like this exists already (my google-fu is fanning air on this tonight). Here's how it works:
    - the pattern-matching part matches source-code patterns spanning multiple lines of code, using a template with binding variables,
    - the pattern-rewriting part uses a template to rewrite the matched code, inserting the contents of the bound variables from the matching template,
    - matching and rewriting templates are associated (1:1) by a simple (unconditional) rewrite rule,
    - the software operates on the abstract syntax tree (AST) of the input application, and outputs a modified AST which can then be regenerated into new source code.
    For example, suppose we find a bunch of while-loops that really should be for-loops. The following template will match the while-loop pattern:

        Template oldLoopPtrn
          int @cnt@ = 0;
          while (@cnt@ < @max@)
          {
            …
            @body@
            ++@cnt@;
          }
        End_Template

    while the following template will specify the output rewrite pattern:

        Template newLoopPtrn
          for(int @cnt@ = 0; @cnt@ < @max@; @cnt@++)
          {
            @body@
          }
        End_Template

    and a simple rule to associate them:

        Rule oldLoopPtrn --> newLoopPtrn

    so code that looks like this

        int i=0;
        while(i<arrlen) {
          printf("element %d: %f\n",i,arr[i]);
          ++i;
        }

    gets automatically rewritten to look like this

        for(int i = 0; i < arrlen; i++) {
          printf("element %d: %f\n",i,arr[i]);
        }

    The closest thing I've seen like this is some of the code-refactoring tools, but they seem to be geared towards interactive rewriting of selected snippets, not wholesale automated changes. I believe that this kind of tool could supercharge refactoring, and would work on multiple languages (even HTML/CSS). I also believe that converting and polishing the code base would be a huge project that I simply cannot do alone in any reasonable amount of time. So, anything like this out there already? If not, any obvious features (besides rewrite-rule conditions) to consider? EDIT: The one feature of this system that I like very much is that the template patterns are fairly obvious and easy to read because they're written in the same language as the target source code, not in some esoteric mutated regex/BNF format.
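    The binding-variable mechanics are easy to prototype. Here is a toy, text-level Python sketch (the system described works on ASTs, so this is only an illustration; all names are made up):

        import re

        def template_to_regex(template):
            # @name@ binders become named groups (repeats become backreferences),
            # literal text is escaped, and any whitespace in the template matches
            # any (possibly empty) run of whitespace in the source.
            out, seen = [], set()
            for tok in re.split(r"(@\w+@|\s+)", template):
                if not tok:
                    continue
                if tok.isspace():
                    out.append(r"\s*")
                elif re.fullmatch(r"@\w+@", tok):
                    name = tok[1:-1]
                    out.append("(?P=%s)" % name if name in seen else "(?P<%s>.+?)" % name)
                    seen.add(name)
                else:
                    out.append(re.escape(tok))
            return re.compile("".join(out), re.DOTALL)

        def rewrite(source, match_template, rewrite_template):
            # Replace every match of the matching template, substituting the
            # captured binder text into the rewrite template.
            def substitute(m):
                text = rewrite_template
                for name, value in m.groupdict().items():
                    text = text.replace("@%s@" % name, value)
                return text
            return template_to_regex(match_template).sub(substitute, source)

    With the two loop templates above (minus the stray …), rewrite(src, oldLoopPtrn, newLoopPtrn) turns the while-loop snippet into the for-loop one, modulo whitespace; AST-based matching, as in the described system, avoids the fragility this regex toy has with comments, strings, and nesting.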

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB). The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date. So far I've used the old method which Sharepoint used (still uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then, I've got a "mapping" table which stores the customer-specific index name, and the database field it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields). However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it in the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static. So, my questions, I guess:
    - Is a NoSQL DB a good fit?
    - Is MongoDB the best for Asp.Net (I saw Raven and Velocity, but they're still kinda beta)?
    - Can I store a key for each document, and then store the action history in a MSSQL DB with this key? I don't need to do joins; it would only be needed when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs denormalized "document" table)?
    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
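    To make the two designs concrete, here is a hypothetical illustration (all tenant and field names invented) of the generic-columns-plus-mapping approach next to the same document stored self-describing, document-database style:

        # Mapping table entry for tenant "acme", document type "invoice":
        invoice_mapping = {
            "Customer ID":    "int_field_1",
            "Invoice Number": "int_field_2",
            "Invoice Date":   "date_field_1",
        }
        # The corresponding row in the shared document table (unused generic
        # columns would simply stay NULL):
        row = {"int_field_1": 42, "int_field_2": 10001, "date_field_1": "2010-06-01"}

        # The same document as a document store might hold it, with the action
        # history either embedded like this or keyed back to an MSSQL table:
        doc = {
            "tenant": "acme",
            "type": "invoice",
            "Customer ID": 42,
            "Invoice Number": 10001,
            "Invoice Date": "2010-06-01",
            "action_history": [
                {"event": "view", "user": "jsmith", "at": "2010-06-02T09:15:00"},
            ],
        }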

    Read the article

  • Eclipse Helios on OS X Snow Leopard crashes frequently when editing certain PHP files

    - by William
    I use Eclipse Helios (Eclipse Platform: 3.6.0.I20100608-0911, Eclipse IDE for PHP Developers: 1.3.0.20100617-0520) all the time on OS X (Snow Leopard), and it seems I only run into trouble whenever I'm editing a PHP file that's part of the WordPress blogging framework. When I move my cursor to a variable or function name, that often triggers the beach ball of death. I suspect Eclipse is trying to look up that variable/function and for some reason that causes an endless loop. Sometimes it's not just variables or functions. Just today I was trying to replace all occurrences of a quoted string. Every time I clicked "Replace All", the program would freeze immediately after the string was replaced and the text cursor was moved to the replaced position. I think the moving of the text cursor is important, because I got the same result when I searched for the string (thus moving the cursor), but NOT when I searched for a nonexistent string. I tried disabling everything in my preferences related to marked occurrences, hovering, code assistance, etc. Nothing helps. I use Eclipse for all my projects, and I find that it's only WordPress projects where this happens. Here's my eclipse.ini file:

        -startup
        ../../../plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
        --launcher.library
        ../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx_1.1.0.v20100503
        -product
        org.eclipse.epp.package.php.product
        --launcher.defaultAction
        openFile
        -showsplash
        org.eclipse.platform
        --launcher.XXMaxPermSize
        512m
        --launcher.defaultAction
        openFile
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -XstartOnFirstThread
        -Dorg.eclipse.swt.internal.carbon.smallFonts
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:MaxGCPauseMillis=10
        -XX:MaxHeapFreeRatio=70
        -XX:+UseConcMarkSweepGC
        -XX:+CMSIncrementalMode
        -XX:+CMSIncrementalPacing
        -XX:CompileThreshold=5
        -Xms128m
        -Xmx512m
        -Xss2m
        -Xdock:icon=../Resources/Eclipse.icns
        -XstartOnFirstThread
        -Dorg.eclipse.swt.internal.carbon.smallFonts
        -framework
        ../../../plugins/org.eclipse.osgi.services_3.2.100.v20100503.jar

    I have 4GB of RAM, so I don't know if the problem is I'm underutilizing my resources. Here's what I see over and over in the error log:

        !ENTRY org.eclipse.jface 2 0 2011-01-16 16:26:21.533
        !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation.
        !SUBENTRY 1 org.eclipse.jface 2 0 2011-01-16 16:26:21.533
        !MESSAGE A conflict occurred for ALT+COMMAND+Q P:
        Binding(ALT+COMMAND+Q P, ParameterizedCommand(Command(org.eclipse.ui.views.showView,Show View, Shows a particular view, Category(org.eclipse.ui.category.views,Views,Commands for opening views,true), org.eclipse.ui.handlers.ShowViewHandler@2a46d1, [Lorg.eclipse.ui.internal.commands.Parameter;@18f50c2,,true), [Lorg.eclipse.core.commands.Parameterization;@1ff1855), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,cocoa,system)
        Binding(ALT+COMMAND+Q P, ParameterizedCommand(Command(org.eclipse.ui.views.showView,Show View, Shows a particular view, Category(org.eclipse.ui.category.views,Views,Commands for opening views,true), org.eclipse.ui.handlers.ShowViewHandler@2a46d1, [Lorg.eclipse.ui.internal.commands.Parameter;@18f50c2,,true), [Lorg.eclipse.core.commands.Parameterization;@96b40c), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,cocoa,system)
        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.217
        !MESSAGE System property http.proxyHost has been set to 127.0.0.1 by an external source. This value will be overwritten using the values from the preferences
        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.217
        !MESSAGE System property http.proxyPort has been set to 8888 by an external source. This value will be overwritten using the values from the preferences
        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.218
        !MESSAGE System property https.proxyHost has been set to 127.0.0.1 by an external source. This value will be overwritten using the values from the preferences
        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.219
        !MESSAGE System property https.proxyPort has been set to 8888 by an external source. This value will be overwritten using the values from the preferences

    I did some experimenting with the particular script that's giving me trouble. It's a hybrid of HTML and PHP, so Eclipse has to do both HTML and PHP validation. I wondered if the HTML validation had something to do with it, so I created a new file, copied the contents over, and messed with the doctype element. I found that if I replaced the well-formed XHTML 1.0 Strict doctype element with a generic doctype (as such: <!DOCTYPE html>), then I did not crash the program just by moving the cursor around. I set all HTML validation rules to "Ignore", but it still didn't solve my problems. For now, I'm just going to echo the doctype using PHP instead of entering it literally. That seems to prevent crashes. I notice that when I move the cursor around the document, Eclipse displays the "xpath" to my current location at the bottom of the screen. Sometimes there's a delay while it figures out my current path. Perhaps when it's validating against the Strict doctype, it has problems quickly calculating the xpath as I move the cursor around? Maybe it has a stack overflow that causes it to crash.

    Read the article

  • Mapstraction: Changing an Icon's image URL after it has been added?

    - by Paul Owens
    I am trying to use marker.setIcon() to change a marker's image. However, it appears that although this changes the marker.iconUrl attribute, the icon itself uses marker.proprietary_marker.$.icon.image to display the marker's image - so the marker's icon remains unchanged. Is there a way to dynamically change marker.proprietary_marker.$.icon.image? To reproduce:
    1. Add a marker.
    2. Check the icon's image URL and the proprietary icon's image - they're the same.
    3. Change the icon.
    4. Check the URLs again. Now the icon URL has changed, but the marker still shows the old image, which is in the proprietary marker object.

        <head>
          <title>Map Test</title>
          <script src="http://maps.google.com/maps?file=api&v=2&key=Your-Google-API-Key" type="text/javascript"></script>
          <script src="mapstraction.js"></script>
          <script type="text/javascript">
            var map;
            var marker;

            function getMap(){
              map = new mxn.Mapstraction('myMap','google');
              map.setCenterAndZoom(new mxn.LatLonPoint(45.559242,-122.636467), 15);
            }

            function addMarker(){
              marker = new mxn.Marker(new mxn.LatLonPoint(45.559242, -122.636467));
              marker.addData({infoBubble : "Text", label : "Label", marker : 4, icon: "http://mapscripting.com/examples/mashups/richter-high.png"});
              map.addMarker(marker);
            }

            function changeIcon(){
              marker.setIcon("http://assets1.mapufacture.com/images/markers/usgs_marker.png");
            }

            function showIconURL(){
              alert(marker.iconUrl);
            }

            function showProprietaryIconURL(){
              alert(marker.proprietary_marker.$.icon.image);
            }
          </script>
        </head>
        <body onload="getMap()">
          <div id="myMap" style="width:627px; height:412px;"></div>
          <div>
            <input type="button" value="add marker" OnClick="addMarker();">
            <input type="button" value="change icon" OnClick="changeIcon();">
            <input type="button" value="show icon URL" OnClick="showIconURL();">
            <input type="button" value="show proprietary icon URL" OnClick="showProprietaryIconURL();">
          </div>
        </body>
        </html>

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships. A Detail contains any number of POIs (points of interest). When I fetch a set of POIs from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows:
    1. Query the ManagedObjectContext for the detailID.
    2. If that detail exists, add the POI to it.
    3. If it doesn't, create the detail (it has other properties that will be populated lazily).
    The problem with this is performance. Performing constant queries against Core Data is slow, to the point where adding a list of 150 POIs takes a minute thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast (look up a key in a dictionary, then create it if it doesn't exist). I have more relationships than just this one, but pretty much every one has to do this check (some are many-to-many, and they have a real problem). Does anyone have any suggestions for how I can speed this up? I could perform fewer queries (by searching for a number of different IDs), but I'm not sure how much this will help. Some code:

        POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI"
            inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
        poi.POIid = [attributeDict objectForKey:kAttributeID];
        poi.detailId = [attributeDict objectForKey:kAttributeDetailID];
        Detail *detail = [self findDetailForID:poi.POIid];
        if(detail == nil) {
            detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail"
                inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
            detail.title = poi.POIid;
            detail.subtitle = @"";
            detail.detailType = [attributeDict objectForKey:kAttributeType];
        }

        -(Detail*)findDetailForID:(NSString*)detailID {
            NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail" inManagedObjectContext:moc];
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:entityDescription];
            NSPredicate *predicate = [NSPredicate predicateWithFormat: @"detailid == %@", detailID];
            [request setPredicate:predicate];
            NSLog(@"%@", [predicate description]);
            NSError *error;
            NSArray *array = [moc executeFetchRequest:request error:&error];
            if (array == nil || [array count] != 1) {
                // Deal with error...
                return nil;
            }
            return [array objectAtIndex:0];
        }
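    The NSDictionary-style cache mentioned above generalizes to a simple get-or-create index, sketched generically in Python below (hypothetical names). In Core Data terms, the analogous fix would be to fetch the existing Detail objects for the whole incoming batch in one request (e.g. with an IN predicate) and keep them in a dictionary for the duration of the import, rather than issuing one fetch per POI:

        # Build/consult one in-memory index instead of running a fetch request
        # for every incoming object.
        details_by_id = {}

        def find_or_create_detail(detail_id, make_detail):
            detail = details_by_id.get(detail_id)
            if detail is None:
                detail = make_detail(detail_id)   # create the lazily-populated Detail
                details_by_id[detail_id] = detail
            return detail

    This turns N fetch requests into one (plus dictionary lookups), which is exactly the behavior the pre-Core-Data version had.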

    Read the article
