Search Results

Search found 314 results on 13 pages for 'dice'.


  • nginx, php-fpm, and multiple roots - how to properly try_files?

    - by Carson C.
    I have a server context which is rooted in a login application. The login application handles, well, logins, and then returns a redirect to "/app" on the same server if a login is successful. The application is rooted elsewhere, which is handled by the location block shown here: location ^~ /app { alias /usr/share/nginx/www/website.com/content/public; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; include fastcgi_params; } } This works just fine; however, the $uri getting passed to PHP still contains /app, even though I am using alias rather than root. Because of this, the try_files directive fails to a 404 unless I link app -> ./ in /usr/share/nginx/www/website.com/content/public. It's obviously silly to have that link in there, and if that link ever gets lost, bam, dead website without an obvious cause. The next thing I tried was to remove the try_files directive entirely. This allowed me to rm the app link in my /public folder, and PHP had no problem locating the file and executing it. I used that to dump my $_SERVER global from PHP, and found that "SCRIPT_FILENAME" => "/usr/share/nginx/www/website.com/content/public/index.php" when the browser URI is /app. This is exactly right. Based on my fastcgi_params below, this led me to believe that try_files $request_filename =404; should work, but no dice. nginx still doesn't find the file, and returns 404. So for right now, it will only work without any try_files directive. PHP finds the file, whereas try_files could not. I understand this may be a PHP security risk. Can anyone indicate how to move forward? The nginx logs don't contain anything relating to the failed try_files attempt, as far as I can see. fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $server_https;
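
    try_files has a long-standing quirk in combination with alias: the path it tests still carries the /app prefix, so the check never lines up with the aliased directory. One commonly suggested workaround, sketched here untested with the same paths as above, is to drop try_files and guard php-fpm with a plain file test on $request_filename (which, per the $_SERVER dump, is already resolved through the alias):

        location ^~ /app {
            alias /usr/share/nginx/www/website.com/content/public;

            location ~ \.php$ {
                # guard against passing nonexistent scripts to php-fpm without try_files,
                # which under alias still checks the /app-prefixed $uri and always misses
                if (!-f $request_filename) { return 404; }
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                include fastcgi_params;
            }
        }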

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a Router. We have a fairly simple routing situation - we have two IP blocks, one which has a route out. We need things on the first block to go through the router, and out. I have an old Cisco 2801 router doing the job right now. For our example - IP Block 1: 50.203.110.232/29, Router interface on this block is 50.203.110.237, route out is 50.203.110.233. IP Block 2: 50.202.219.1/27, Router interface on this block at 50.202.219.20. I have a static route created for: 0.0.0.0 0.0.0.0 50.203.110.233 The router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo! The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237. I cannot ping anything else in IP block 1, nor can I ping 8.8.8.8. Here is my configuration file: <HP>display current-configuration # version 5.20.106, Release 2507, Standard # sysname HP # domain default enable system # dar p2p signature-file flash:/p2p_default.mtd # port-security enable # undo ip http enable # password-recovery enable # vlan 1 # domain system access-limit disable state active idle-cut disable self-service-url disable # user-group system group-attribute allow-guest # local-user admin password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV authorization-attribute level 3 service-type telnet # cwmp undo cwmp enable # interface Aux0 async mode flow link-protocol ppp # interface Cellular0/0 async mode protocol link-protocol ppp # interface Ethernet0/0 port link-mode route ip address 50.203.110.237 255.255.255.248 # interface Ethernet0/1 port link-mode route ip address 50.202.219.20 255.255.255.224 # interface NULL0 # ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent # load xml-configuration # load tr069-configuration # user-interface tty 12 user-interface aux 0 user-interface vty 0 4 authentication-mode scheme # My guess right now is there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that don't make the situation much more complicated (but maybe it needs to be more complicated?) Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance! Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added: ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32 alas, still no dice.
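
    Given that the router itself can reach 8.8.8.8 but hosts on block 2 cannot, a likely suspect is the return path: the upstream gateway 50.203.110.233 presumably knows about 50.203.110.232/29 but may have no route back to 50.202.219.0/27, so replies die upstream rather than on the MSR. Two hedged options (Comware syntax; exact keywords may vary by release): have the upstream device add a return route for block 2, or source-NAT block 2 out of Ethernet0/0 so traffic leaves as 50.203.110.237. An untested sketch of the second:

        # ACL 2000 matches the 50.202.219.0/27 block
        acl number 2000
         rule 0 permit source 50.202.219.0 0.0.0.31
        #
        interface Ethernet0/0
         nat outbound 2000
        #
        # alternative, on the upstream device (its own syntax):
        #   add a route for 50.202.219.0/27 via 50.203.110.237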

    Read the article

  • How to setup multiple Apache SSL sites using multiple IP addresses

    - by Jeff
    How do you setup a single Apache2 config to host multiple HTTPS sites each on their own IP address? There will also be multiple HTTP sites on just a single IP address. I do not want to use Server Name Indication (SNI) as described here, and I'm only concerned with the important top-level Apache directives. That is, I just need to know the skeleton of how my config should look. The basic setup looks like this: Hosted on 1.1.1.1:80 (HTTP) - example.com - example.net - example.org Hosted on 2.2.2.2:443 (HTTPS) - secure.com Hosted on 3.3.3.3:443 (HTTPS) - secure.net Hosted on 4.4.4.4:443 (HTTPS) - secure.org And here are the important config directives I have so far, which is the closest I've come to a working iteration, but still no dice. I know I'm close, just need a little push in the right direction. Listen 1.1.1.1:80 Listen 2.2.2.2:443 Listen 3.3.3.3:443 Listen 4.4.4.4:443 NameVirtualHost 1.1.1.1:80 NameVirtualHost 2.2.2.2:443 NameVirtualHost 3.3.3.3:443 NameVirtualHost 4.4.4.4:443 # HTTP VIRTUAL HOSTS: <VirtualHost 1.1.1.1:80> ServerName example.com DocumentRoot /home/foo/example.com </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.net DocumentRoot /home/foo/example.net </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.org DocumentRoot /home/foo/example.org </VirtualHost> # HTTPS VIRTUAL HOSTS: <VirtualHost 2.2.2.2:443> ServerName secure.com DocumentRoot /home/foo/secure.com SSLEngine on SSLCertificateFile /home/foo/ssl/secure.com.crt SSLCertificateKeyFile /home/foo/ssl/secure.com.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 3.3.3.3:443> ServerName secure.net DocumentRoot /home/foo/secure.net SSLEngine on SSLCertificateFile /home/foo/ssl/secure.net.crt SSLCertificateKeyFile /home/foo/ssl/secure.net.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 4.4.4.4:443> ServerName secure.org DocumentRoot /home/foo/secure.org SSLEngine on SSLCertificateFile /home/foo/ssl/secure.org.crt SSLCertificateKeyFile /home/foo/ssl/secure.org.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> For what it's worth, I prefer to have each of my SSL sites on their own IP instead of including one of them on the primary VHOST IP. Any links which show a standard setup would be more than welcome!
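
    The skeleton above is essentially the standard shape; one detail worth noting is that NameVirtualHost is only needed where several vhosts share one address:port (here, 1.1.1.1:80). The IP-based HTTPS vhosts, one per address, do not need it at all. A trimmed sketch under that assumption, using the same 2.2-era directives:

        Listen 1.1.1.1:80
        Listen 2.2.2.2:443
        Listen 3.3.3.3:443
        Listen 4.4.4.4:443

        # name-based matching only where vhosts share a socket
        NameVirtualHost 1.1.1.1:80

        <VirtualHost 1.1.1.1:80>
            ServerName example.com
            DocumentRoot /home/foo/example.com
        </VirtualHost>
        # ...repeat one <VirtualHost 1.1.1.1:80> block per HTTP site...

        <VirtualHost 2.2.2.2:443>
            ServerName secure.com
            DocumentRoot /home/foo/secure.com
            SSLEngine on
            SSLCertificateFile    /home/foo/ssl/secure.com.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.com.key
            SSLCACertificateFile  /home/foo/ssl/ca.txt
        </VirtualHost>
        # ...repeat one <VirtualHost x.x.x.x:443> block per HTTPS site, each on its own IP...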

    Read the article

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, We have a CF8 app that runs for 20-25 minutes before crashing under heavy load of ~1200 users. This load is generated by our load testing tool: 1200 users ramped up in 5 mins (approx behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g. Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks that pointed to sessions. "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167" "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little bit in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps, and saw a thread locking a monitor, for example: "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.getEntry(HashMap.java:347) at java.util.HashMap.containsKey(HashMap.java:335) at java.util.HashSet.contains(HashSet.java:184) at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124) at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598) at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510) at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250) at coldfusion.util.FastHashtable.put(FastHashtable.java:43) - locked <0x6f7e1a78> (a coldfusion.runtime.Struct) at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027) at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117) at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377) As you can see in the last line above, there were several references to CFMs and CFCs, and the lines have "cflock" tags, which were scoped to the "application." We (the dev team) then changed them to be scoped to a "name". After more load tests, there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and they are not being taxed; even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or Admin. I am just running the load test, analyzing the reports, threads, etc., and sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM
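
    The RUNNABLE trace above sits inside the Server Monitor's memory tracking (coldfusion.monitor.memory.*), so if memory tracking is switched on in the CF8 Server Monitor, turning it off is a cheap first experiment. Beyond that, a timed series of thread dumps and GC snapshots during the test makes it easier to see whether threads keep piling up in one spot or the heap is simply filling. A rough shell sketch, assuming the JDK's jstack/jstat work against the JRun JVM and $1 is its pid:

        #!/bin/sh
        # capture a thread dump and GC utilisation snapshot every 30 seconds during the load test
        PID=$1
        while true; do
            ts=`date +%Y%m%d-%H%M%S`
            jstack $PID > threads-$ts.txt 2>&1
            jstat -gcutil $PID | tail -1 >> gc-util.log
            sleep 30
        done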

    Read the article

  • DLINK WBR-1310B Wireless Router seems to hang...

    - by Ira Baxter
    I have a brand new DLINK-1310B Wireless Router (box never before opened, although I bought it at the neighborhood computer junk store). I am using it at home (and in fact am using it at this instant from a wireless laptop). When operative, I can ping it at 192.168.0.1, and I can log into it from the PC attached to it by LAN and from the wireless PC at //192.168.0.1. In the course of the day since I installed it, it seems to have locked up 3 times. Each time, the symptom is that my web browser (or other IP service, e.g., POP3) stops with a "No internet connection" error. Attempts to contact the router via 192.168.0.1 get no reaction, from either the wireless laptop or from the hardwired PC sitting next to it. It doesn't respond to pings to that address either. Power cycling the router fixes it. I've seen discussion in other questions about aging cheap electronics. It's too new to be aged. Anybody else seen this behavior with a DLINK-1310? Or do I just need to exchange it for another and try again? (I hate rolling dice; I bought the DLINK because a previous Linksys died of apparent heating problems. How many do I have to cycle through before I get something that works and is long-term stable?) Remarkably, nobody talks about how much software is in a router. Is the stuff just buggy? EDIT: Happened again, while I was working on the wireless Vista laptop. (Seems like once an hour?) I was a little more careful this time. The wireless laptop can ping it. It can't get the login screen. I visited the LAN-connected PC (takes me a minute to walk from the laptop to the PC at the other end of the house), and attempted to visit a random web page. Surprise, that worked! And, now, after a minute walking back to the laptop, I can reconnect the wireless laptop, and get to the login page from it. Strangely, the time/date has been reset back to 2002. (I'll swear I set it and saved the system configuration after updating the firmware; it made me redo every other bit of reconfiguration again.) Is there something funny about wireless leases expiring? The router says the leases it is handing out are good for 180 minutes, and the delay-to-inaccessible was only about an hour. The DSL connection seems to have a 10 minute lease.

    Read the article

  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I setup a Rails development environment. I am using Parallels Tools to share out my OS X home directory to the VM - the goal here is to run the Rails server on the VM, but host the files on OS X (so they are automatically backed up, and so I can use tools like Textmate to develop with). Everything seems to work with the shared directory - my Debian user can read, write, and execute files. However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in the question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc...) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work! This will persist until I make a change to the CSS files and it has to recompile. Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works... It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice...any ideas? Update I've found that the number of times I need to refresh before it works is not random - it has to do with how many assets are being compiled. For example, if I edit one of my CSS files, and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app will finally work without the operation not permitted error...
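
    Since the same project behaves when it lives outside the Parallels share, one low-risk experiment is to keep the sources on the share but move tmp/ (and with it tmp/cache/assets) onto the VM's native filesystem, so the asset compiler's mkdir/chown calls get ordinary Linux permission semantics. A sketch, with the target path invented for illustration:

        # run inside the VM, from the project root on the shared folder
        mkdir -p ~/rails-tmp/myproject
        mv tmp/* ~/rails-tmp/myproject/ 2>/dev/null   # keep anything already cached
        rm -rf tmp
        ln -s ~/rails-tmp/myproject tmp               # Rails now writes its cache to local disk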

    Read the article

  • Why isn't passwordless ssh working?

    - by Nelson
    I have two Ubuntu Server machines sitting at home. One is 192.168.1.15 (we'll call this 15), and the other is 192.168.1.25 (we'll call this 25). For some reason, when I want to setup passwordless login from 15 to 25, it works like a champ. When I repeat the steps on 25, so that 25 can login without a password on 15, no dice. I have checked both sshd_config files. Both have: RSAAuthentication yes PubkeyAuthentication yes I have checked permissions on both servers: drwx------ 2 bion2 bion2 4096 Dec 4 12:51 .ssh -rw------- 1 bion2 bion2 398 Dec 4 13:10 authorized_keys On 25. drwx------ 2 shimdidly shimdidly 4096 Dec 4 19:15 .ssh -rw------- 1 shimdidly shimdidly 1018 Dec 4 18:54 authorized_keys On 15. I just don't understand when things would work one way and not the other. I know it's probably something obvious just staring me in the face, but for the life of me, I can't figure out what is going on. Here's what ssh -v says when I try to ssh from 25 to 15: ssh -v -p 51337 192.168.1.15 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 192.168.1.15 [192.168.1.15] port 51337. debug1: Connection established. debug1: identity file /home/shimdidly/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/shimdidly/.ssh/id_rsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: identity file /home/shimdidly/.ssh/id_dsa-cert type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa type -1 debug1: identity file /home/shimdidly/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 54:5c:60:80:74:ab:ab:31:36:a1:d3:9b:db:31:2a:ee debug1: Host '[192.168.1.15]:51337' is known and matches the ECDSA host key. debug1: Found key in /home/shimdidly/.ssh/known_hosts:2 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/shimdidly/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Offering DSA public key: /home/shimdidly/.ssh/id_dsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/shimdidly/.ssh/id_ecdsa debug1: Next authentication method: password
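
    The -v trace shows both keys being offered and the server quietly falling back to password, which usually means sshd on 15 rejected them; the reason is only visible on the server side. Running a second sshd in debug mode and re-checking the permissions StrictModes cares about (the home directory itself, not just ~/.ssh) tends to surface it quickly. A sketch, assuming sshd lives at /usr/sbin/sshd:

        # on 15: a one-off debug instance on a spare port prints why each offered key is refused
        sudo /usr/sbin/sshd -d -p 2222

        # on 25: connect to the debug instance and compare both sides' output
        ssh -v -p 2222 192.168.1.15

        # on 15: StrictModes also rejects keys if the home directory is group/world-writable
        chmod go-w ~ && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys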

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till mid of July has been the fever of sports. First, it was Wimbledon Tennis and then the Soccer fever was all over. There is a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England later part of July. Following these sports and as the events unfold to the finals, there are a number of ways the statisticians can slice and dice the numbers. Cue from soccer I can surely say there is a team performance against another team and then there is individual member fairs against a particular opponent. Such statistics give us a fair idea to how a team in the past or in the recent past has fared against each other, head-to-head stats during World cup and during other neutral venue games. All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team because the individuals who performed in each of these games are totally different (Typical example being the Brazil Vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and get the next level information. Similar to these statistics, SQL Server Management studio is also equipped with a number of reports like a) Object Execution Statistics report and b) Batch Execution Statistics reports. As discussed in the example, the team scorecard is like the Batch Execution statistics and individual stats is like Object Level statistics. The analogy can be taken only this far, trust me there is no correlation between SQL Server functioning and playing sports – It is like I think about diet all the time except while I am eating. Performance – Batch Execution Statistics Let us view the first report which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections as outline below.   Section 1: This is a graphical bar graph representation of Average CPU Time, Average Logical reads and Average Logical Writes for individual batches. The Batch numbers are indicative and the details of individual batch is available in section 3 (detailed below). Section 2: This represents a Pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%) by batches. This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch Numbers. This also gives us the details of Average CPU / Average Logical Reads and Average Logical Writes in the system for the given batch with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries. Performance – Object Execution Statistics The second report worth a look is Object Execution statistics. This is a similar report as the previous but turned on its head by SQL Server Objects. The report has 3 areas to look as above. Section 1 gives the Average CPU, Average IO bar charts for specific objects. The section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects in detail with the Avg. CPU, IO and other details which are self-explanatory. 
At a high-level both the reports are based on queries on two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and it builds values based on calculations using columns in them: SELECT * FROM    sys.dm_exec_query_stats s1 CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 WHERE   s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL ORDER BY  s1.sql_handle; This is one of the simplest form of reports and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about what is the possible performance tuning area. As a closing point I must emphasize that all above reports pick up data from the plan cache. If a particular query has consumed a lot of resources earlier, but plan is not available in the cache, none of the above reports would show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports
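
    For readers who want the same numbers without going through the report UI, a similar slice can be pulled straight from the DMVs the reports are built on; a hedged example that ranks cached batches by average CPU per execution (total_worker_time is reported in microseconds):

        SELECT TOP 10
               s2.[text]                                      AS batch_text,
               s1.execution_count,
               s1.total_worker_time    / s1.execution_count   AS avg_cpu_microsec,
               s1.total_logical_reads  / s1.execution_count   AS avg_logical_reads,
               s1.total_logical_writes / s1.execution_count   AS avg_logical_writes
        FROM   sys.dm_exec_query_stats AS s1
        CROSS APPLY sys.dm_exec_sql_text(s1.sql_handle) AS s2
        ORDER BY s1.total_worker_time / s1.execution_count DESC;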

    Read the article

  • NHibernate.MappingException: No persister for:

    - by Sara Chipps
    Now, before you say it: I DID google and my hbm.xml file IS an Embedded Resource. Here is the code I am calling: ISession session = GetCurrentSession(); var returnObject = session.Get<T>(Id); Here is my mapping file for the class: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <class name="HQData.Objects.SubCategory, HQData" table="SubCategory" lazy="true"> <id name="ID" column="ID" unsaved-value="0"> <generator class="identity" /> </id> <property name="Name" column="Name" /> <property name="NumberOfBuckets" column="NumberOfBuckets" /> <property name="SearchCriteriaOne" column="SearchCriteriaOne" /> <bag name="_Businesses" cascade="all"> <key column="SubCategoryId"/> <one-to-many class="HQData.Objects.Business, HQData"/> </bag> <bag name="_Buckets" cascade="all"> <key column="SubCategoryId"/> <one-to-many class="HQData.Objects.Bucket, HQData"/> </bag> </class> </hibernate-mapping> Has anyone run into this issue before? I swore that was it after I read it, but no dice. Here is the rest of the error, and thanks for your help. MappingException: No persister for: HQData.Objects.SubCategory]NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName, Boolean throwIfNotFound) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:766 NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:752 NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Event\Default\DefaultLoadEventListener.cs:37 NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:2054 NHibernate.Impl.SessionImpl.Get(String entityName, Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:1029 NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:1020 NHibernate.Impl.SessionImpl.Get(Object id) in c:\CSharp\NH2.0.0\nhibernate\src\NHibernate\Impl\SessionImpl.cs:985 HQData.DataAccessUtils.NHibernateObjectHelper.LoadDataObject(Int32 Id) in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQData\DataAccessUtils\NHibernateObjectHelper.cs:42 HQWebsite.LocalSearch.get_subCategory() in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQWebsite\LocalSearch.aspx.cs:17 HQWebsite.LocalSearch.Page_Load(Object sender, EventArgs e) in C:\Development\HQChannelRepo\HQ Channel Application\HQChannel\HQWebsite\LocalSearch.aspx.cs:27 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +15 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +47 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1436 Update: I had changed some code and I wasn't adding the Assembly to the config file during runtime. Thanks for your help. This has been fixed; I am not F-ing with my NHibernate setup ever again!
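
    As the closing update hints, the usual checklist for "No persister for" is: the mapping really is an embedded resource, its name ends in .hbm.xml (AddAssembly only scans for that suffix), and the assembly that embeds it is actually handed to the Configuration before the session factory is built. A hedged C# sketch of that last step, with names taken from the mapping above:

        // sketch: register the mapping assembly before BuildSessionFactory is called
        var cfg = new NHibernate.Cfg.Configuration();
        cfg.Configure();  // reads the hibernate.cfg.xml / web.config settings
        cfg.AddAssembly(typeof(HQData.Objects.SubCategory).Assembly);  // picks up embedded *.hbm.xml in HQData

        var sessionFactory = cfg.BuildSessionFactory();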

    Read the article

  • Autoconf (newbie) -- building with static library

    - by EB
    I am trying to migrate from manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate. That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) Manual compilation is working with these: gcc ... -I/path/to/header/file/directory /full/path/to/the/.a/file/itself (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.) I can use the configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CPFLAGS and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the name of the needed files are myprog.h and myprog.a (which are both in the same directory), here is what works so far: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"]) Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library which produces earlier errors that indicate that the function prototypes have been loaded and used to compile. I tried adding the location that contains the .a file to LDFLAGS and then doing a AC_CHECK_LIB but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location"; AC_CHECK_LIB(myprog)]) No dice. AC_CHECK_LIB is looking for -lmyprog I guess (or libmyprog?) so I'm not sure if that's a problem, so I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location/myprog.a"]) To emulate the manual compilation, I tried removing the -L but that doesn't help: AC_CHECK_HEADER([$location/myprog.h], [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS $location/myprog.a"]) I tried other combinations and permutations, but I think I might be missing something more fundamental....
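
    The piece that usually trips people up here is that -L and AC_CHECK_LIB only work for archives that follow the lib<name>.a convention; a bare .a is best treated like an extra object file and appended, full path and all, to LIBS (headers, meanwhile, belong in CPPFLAGS rather than CFLAGS). A hedged configure.ac sketch along those lines, reusing $location and the myprog names from above:

        AC_CHECK_HEADER([$location/myprog.h],
          [AC_DEFINE([HAVE_MYPROG_H], [1], [found myprog.h])
           CPPFLAGS="$CPPFLAGS -I$location"
           # pass the archive like an object file, mirroring the manual gcc line;
           # -lmyprog would only work for a library actually named libmyprog.a
           LIBS="$location/myprog.a $LIBS"],
          [AC_MSG_ERROR([myprog.h not found under $location])])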

    Read the article

  • Custom NSView in NSMenuItem not receiving mouse events

    - by Dennis
    I have an NSMenu popping out of an NSStatusItem using popUpStatusItemMenu. These NSMenuItems show a bunch of different links, and each one is connected with setAction: to the openLink: method of a target. This arrangement has been working fine for a long time. The user chooses a link from the menu and the openLink: method then deals with it. Unfortunately, I recently decided to experiment with using NSMenuItem's setView: method to provide a nicer/slicker interface. Basically, I just stopped setting the title, created the NSMenuItem, and then used setView: to display a custom view. This works perfectly, the menu items look great and my custom view is displayed. However, when the user chooses a menu item and releases the mouse, the action no longer works (i.e., openLink: isn't called). If I just simply comment out the setView: call, then the actions work again (of course, the menu items are blank, but the action is executed properly). My first question, then, is why setting a view breaks the NSMenuItem's action. No problem, I thought, I'll fix it by detecting the mouseUp event in my custom view and calling my action method from there. I added this method to my custom view: - (void)mouseUp:(NSEvent *)theEvent { NSLog(@"in mouseUp"); } No dice! This method is never called. I can set tracking rects and receive mouseEntered: events, though. I put a few tests in my mouseEntered routine, as follows: if ([[self window] ignoresMouseEvents]) { NSLog(@"ignoring mouse events"); } else { NSLog(@"not ignoring mouse events"); } if ([[self window] canBecomeKeyWindow]) { dNSLog((@"canBecomeKeyWindow")); } else { NSLog(@"not canBecomeKeyWindow"); } if ([[self window] isKeyWindow]) { dNSLog((@"isKeyWindow")); } else { NSLog(@"not isKeyWindow"); } And got the following responses: not ignoring mouse events canBecomeKeyWindow not isKeyWindow Is this the problem? "not isKeyWindow"? Presumably this isn't good because Apple's docs say "If the user clicks a view that isn’t in the key window, by default the window is brought forward and made key, but the mouse event is not dispatched." But there must be a way do detect these events. HOW? Adding: [[self window] makeKeyWindow]; has no effect, despite the fact that canBecomeKeyWindow is YES.
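
    Setting a view on an NSMenuItem does hand event handling to the view, so the item's action is no longer fired automatically. Rather than fighting the carrier window's key status, the view can route the click back through the menu itself via -enclosingMenuItem (available since 10.5). A hedged sketch of that pattern, untested here:

        // inside the custom menu-item view
        - (void)mouseUp:(NSEvent *)theEvent {
            NSMenuItem *item = [self enclosingMenuItem];
            NSMenu *menu = [item menu];

            // dismiss the menu, then fire the item's target/action as if it had been chosen normally
            [menu cancelTracking];
            [menu performActionForItemAtIndex:[menu indexOfItem:item]];
        }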

    Read the article

  • Query Results Not Expected

    - by E-Madd
    I've been a CF developer for 15 years and I've never run into anything this strange or frustrating. I've pulled my hair out for hours, googled, abstracted, simplified, prayed and done it all in reverse. Can you help me? A cffunction takes one string argument and from that string I build an array of "phrases" to run a query with, attempting to match a location name in my database. For example, the string "the republic of boulder" would produce the array: ["the","republic","of","boulder","the republic","the republic of","the republic of boulder","republic of","republic of boulder","of boulder"]. Another cffunction uses the aforementioned cffunction and runs a cfquery. A query based on the previously given example would be... select locationid, locationname, locationaliasname from vwLocationsWithAlias where LocationName in ('the','the republic','the republic of','republic','republic of','republic of boulder','of','of boulder','boulder') or LocationAliasName in ('the','the republic','the republic of','republic','republic of','republic of boulder','of','of boulder','boulder') This returns 2 records... locationid - locationname - locationalias 99 - 'Boulder' - 'the republic' 68 - 'Boulder' - NULL This is good. Works fine and dandy. HOWEVER... if the string is changed to "the republic", resulting in the phrases array ["the","republic","the republic"] which is then used to produce the query... select locationid, locationname, locationaliasname from vwLocationsWithAlias where LocationName in ('the','the republic','republic') or LocationAliasName in ('the','the republic','republic') This returns 0 records. Say what?! OK, just to make sure I'm not involuntarily HIGH I run that very same query in my SQL console against the same database in the cf datasource. 1 RECORD! locationid - locationname - locationalias 99 - 'Boulder' - 'the republic' I can even hard-code that sql within the same cffunction and get that one result, but never from the dynamically generated SQL. I can get my location phrases from another cffunction of a different name that returns hard-coded array values and those work, but never if the array is dynamically built. I've tried removing cfqueryparams, triple-checking my datatypes, datasource setups, etc., etc., etc. NO DICE WTF!? Is this an obscure bug? Am I losing my mind? I've tried everything I can think of and others (including Ray Camden) can think of. ColdFusion 8 (with all the latest hotfixes) SQL Server 2005 (with all the greatest service packs) Windows 2003 Server (with all the latest updates, service packs and nightly MS voodoo)
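
    One way to take the string interpolation itself out of the equation (and rule out stray characters or trailing spaces in the generated phrases) is to bind the whole array through a single list-typed cfqueryparam instead of building quoted IN lists by hand. A hedged sketch, with the datasource and variable names assumed rather than known:

        <cfquery name="qLocations" datasource="#application.dsn#">
            select locationid, locationname, locationaliasname
            from   vwLocationsWithAlias
            where  LocationName in (
                     <cfqueryparam cfsqltype="cf_sql_varchar" list="true"
                                   value="#arrayToList(phrases)#" />
                   )
               or  LocationAliasName in (
                     <cfqueryparam cfsqltype="cf_sql_varchar" list="true"
                                   value="#arrayToList(phrases)#" />
                   )
        </cfquery>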

    Read the article

  • Classes to Entities; Like-class inheritence problems

    - by Stacey
    Beyond work, some friends and I are trying to build a game of sorts; The way we structure some of it works pretty well for a normal object oriented approach, but as most developers will attest this does not always translate itself well into a database persistent approach. This is not the absolute layout of what we have, it is just a sample model given for sake of representation. The whole project is being done in C# 4.0, and we have every intention of using Entity Framework 4.0 (unless Fluent nHibernate can really offer us something we outright cannot do in EF). One of the problems we keep running across is inheriting things in database models. Using the Entity Framework designer, I can draw the same code I have below; but I'm sure it is pretty obvious that it doesn't work like it is expected to. To clarify a little bit; 'Items' have bonuses, which can be of anything. Therefore, every part of the game must derive from something similar so that no matter what is 'changed' it is all at a basic enough level to be hooked into. Sounds fairly simple and straightforward, right? So then, we inherit everything that pertains to the game from 'Unit'. Weights, Measures, Random (think like dice, maybe?), and there will be other such entities. Some of them are similar, but in code they will each react differently. We're having a really big problem with abstracting this kind of thing into a database model. Without 'Enum' support, it is proving difficult to translate into multiple tables that still share a common listing. One solution we've depicted is to use a 'key ring' type approach, where everything that attaches to a character is stored on a 'Ring' with a 'Key', where each Key has a Value that represents a type. This works functionally but we've discovered it becomes very sluggish and performs poorly. We also dislike this approach because it begins to feel as if everything is 'dumped' into one class; which makes management and logical structure difficult to adhere to. I was hoping someone else might have some ideas on what I could do with this problem. It's really driving me up the wall; To summarize; the goal is to build a type (Unit) that can be used as a base type (Table per Type) for generic reference across a relatively global scope, without having to dump everything into a single collection. I can use an Interface to determine actual behavior so that isn't too big of an issue. This is 'roughly' the same idea expressed in the Entity Framework.
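
    For what it's worth, the shape being described, one narrow base table plus a table per subtype joined on the shared key, is what Entity Framework calls Table-per-Type (TPT) inheritance, and it avoids the single "dumped" collection of the key-ring approach. A hedged illustration in C# (all names invented; in EF 4.0 the actual mapping is drawn in the designer rather than written in code):

        // each concrete type maps to its own table sharing Unit's key (Table-per-Type),
        // so anything that needs "any game element" can reference the base type
        public abstract class Unit
        {
            public int UnitId { get; set; }          // shared primary key across the hierarchy
            public string Name { get; set; }
        }

        public class Weight : Unit
        {
            public decimal Kilograms { get; set; }   // stored in the Weights table only
        }

        public class RandomRoll : Unit
        {
            public int Sides { get; set; }           // stored in the RandomRolls table only
        }

        public class ItemBonus
        {
            public int ItemBonusId { get; set; }
            public virtual Unit Target { get; set; } // can point at any Unit subtype
        }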

    Read the article

  • Autoconf -- including a static library (newbie)

    - by EB
    I am trying to migrate my application from manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate. That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) It is also not named traditionally (it does not start with "lib" or "l"). Manual compilation is working with these (directory is not predictable - this is just an example): gcc ... -I/home/john/mystuff /home/john/mystuff/helper.a (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.) So, in my configure.ac, I can use the relevant configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CPFLAGS and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the name of the needed files are helper.h and helper.a (which are both in the same directory), here is what works so far: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"]) Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library which produces earlier errors that indicate that the function prototypes have been loaded and used to compile. I tried adding the location that contains the .a file to LDFLAGS and then doing a AC_CHECK_LIB but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location"; AC_CHECK_LIB(helper)]) No dice. AC_CHECK_LIB is looking for -lhelper I guess (or libhelper?) so I'm not sure if that's a problem, so I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location/helper.a"]) To emulate the manual compilation, I tried removing the -L but that doesn't help: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS $location/helper.a"]) I tried other combinations and permutations, but I think I might be missing something more fundamental....
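
    Once the archive's full path has been appended to LIBS (AC_CHECK_LIB is a dead end here, since it insists on the -lname convention), configure itself can prove the link works instead of leaving it for the first build to discover. A hedged sketch; helper_init is an invented stand-in for a symbol helper.a really exports:

        # assumes CPPFLAGS already carries -I$location and LIBS ends with $location/helper.a
        AC_MSG_CHECKING([whether helper.a links])
        AC_LINK_IFELSE(
          [AC_LANG_PROGRAM([[#include <helper.h>]],
                           [[helper_init();]])],
          [AC_MSG_RESULT([yes])],
          [AC_MSG_RESULT([no])
           AC_MSG_ERROR([could not link a test program against $location/helper.a])])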

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMWare's multi monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice & dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • How to pass an int value on UIButton click inside UITableView

    - by Toran Billups
    So I have a view controller with a single UITableView that shows a string to the user. But I needed a way to have the user select this object by id using a normal UIButton so I added it as a subview and the click event works as expected. The issue is this - how can I pass an int value to this click event? I've tried using attributes of the button like button.tag or button.accessibilityhint without much success. How do the professionals pass an int value like this? Also it's important that I can get the actual [x intValue] from the int later in the process. A while back I thought I could do this with a simple NSLog(@"the actual selected hat was %d", [obj.idValue intValue]); But this doesn't appear to work (any idea how I can pull the actual int value directly from the variable?). The working code I have is below - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } if ([self.hats count] > 0) { Hat* obj = [self.hats objectAtIndex: [indexPath row]]; NSMutableString* fullName = [[NSMutableString alloc] init]; [fullName appendFormat:@"%@", obj.name]; [fullName appendFormat:@" %@", obj.type]; cell.textLabel.text = fullName; cell.accessoryType = UITableViewCellAccessoryDetailDisclosureButton; //add the button to subview hack UIButton* button = [UIButton buttonWithType:UIButtonTypeRoundedRect]; button.frame = CGRectMake(50, 5, 50, 25); [button setTitle:@"Select" forState:UIControlStateNormal]; button.backgroundColor = [UIColor clearColor]; button.adjustsImageWhenHighlighted = YES; button.tag = obj.idValue; //this is my current hack to add the id but no dice :( [button addTarget:self action:@selector(action:) forControlEvents:UIControlEventTouchUpInside]; [cell.contentView addSubview:button]; //hack } return cell; } - (void)action:(id)sender { int* hatId = ((UIButton *)sender).tag; //try to pull the id value I added above ... //do something w/ the hatId }
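
    Since tag is declared as a plain NSInteger on UIView, the id can ride on the button directly; the snag in the snippet above is reading it back as a pointer (int*) and the earlier attempt to send -intValue to something that isn't an NSNumber. A hedged sketch of the round trip, assuming idValue is an ordinary integer property (if it is an NSNumber, store [obj.idValue integerValue] instead):

        // in cellForRowAtIndexPath:
        button.tag = obj.idValue;          // tag is an NSInteger, so the id is stored as-is
        [button addTarget:self action:@selector(action:)
             forControlEvents:UIControlEventTouchUpInside];

        // the handler
        - (void)action:(id)sender {
            NSInteger hatId = ((UIButton *)sender).tag;   // no pointer, no -intValue needed
            NSLog(@"the actual selected hat was %ld", (long)hatId);
            // ...look up the Hat by hatId...
        }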

    Read the article

  • PPTP connection fails with errors 800/806

    - by Mark S. Rasmussen
    I've got a client (Server 2008 R2) that won't connect to our production environment PPTP VPN server (Server 2003, running RRAS). The server is behind a firewall that has TCP1723 open as well as GRE. Other clients at our office are able to connect just fine. Our office is behind a Juniper SSG5-Serial firewall, but all outgoing traffic is allowed, and multiple other clients are able to connect to VPN servers without issues. I've also setup a completely different VPN server on another network outside of our office. The functioning clients connect just fine - the Server 2008 R2 machine doesn't. Thus it's definitely a problem with this machine in particular. I've rebooted it. I've disabled the firewall, no dice on either. I've run PPTPSRV and PPTPCLNT on the server/client and they're able to communicate perfectly - indicating there's no problem using neither TCP1723 nor GRE. The Server 2008 R2 machine is also running as a VPN server itself (incoming connection) and that's working perfectly. We have the issues no matter if there are active incoming connections or not. I'm not sure what my next debugging step would be; any suggestions? EDIT: The event log on the server has the following warning from RasMan: A connection between the VPN server and the VPN client xxx.xxx.xxx.xxx has been established, but the VPN connection cannot be completed. The most common cause for this is that a firewall or router between the VPN server and the VPN client is not configured to allow Generic Routing Encapsulation (GRE) packets (protocol 47). Verify that the firewalls and routers between your VPN server and the Internet allow GRE packets. Make sure the firewalls and routers on the user's network are also configured to allow GRE packets. If the problem persists, have the user contact the Internet service provider (ISP) to determine whether the ISP might be blocking GRE packets. Obviously this points to GRE being a potential problem. But seeing as I have other clients connectiong without problems, as well as PPTPSRV and PPTPCLNT being able to communicate, I'm suspecting this might be a red herring. EDIT: Here are the anonymized events logged by the client in chronological order: CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has started dialing a VPN connection using a per-user connection profile named ZZZ. The connection settings are: Dial-in User = XXX\YYY VpnStrategy = PPTP DataEncryption = Require PrerequisiteEntry = AutoLogon = No UseRasCredentials = Yes Authentication Type = CHAP/MS-CHAPv2 Ipv4DefaultGateway = No Ipv4AddressAssignment = By Server Ipv4DNSServerAssignment = By Server Ipv6DefaultGateway = Yes Ipv6AddressAssignment = By Server Ipv6DNSServerAssignment = By Server IpDnsFlags = Register primary domain suffix IpNBTEnabled = Yes UseFlags = Private Connection ConnectOnWinlogon = No. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY is trying to establish a link to the Remote Access Server for the connection named ZZZ using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY has successfully established a link to the Remote Access Server using the following device: Server address/Phone Number = XXX.YYY.ZZZ.KKK Device = WAN Miniport (PPTP) Port = VPN3-4 MediaType = VPN. CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The link to the Remote Access Server has been established by user XXX\YYY. 
CoId={742CB15C-A7E0-47B7-8240-0EFA1139CBD9}: The user XXX\YYY dialed a connection named ZZZ which has failed. The error code returned on failure is 806. Running Wireshark on the client shows it trying and retrying to send a "71 Configuration Request" While the server shows the incoming client requests, but apparently without replying: Given that this is GRE traffic, I think rules out the GRE traffic being blocked. Question is, why doesn't the server reply? This is the Configuration Request the server receives from the non functioning client (meaning no response is sent to the client request): And this is the Configuration Request the server receives from the working client: To me they seem identical, except for differing keys and magic numbers, and the fact that one client receives a response while the other doesn't.
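
    Since the server-side capture shows the client's LCP Configuration Requests arriving inside GRE but no replies going out, the useful next distinction is whether the RRAS box generates replies that get dropped on the return path or never generates them at all. Capturing only GRE (IP protocol 47) and PPTP control traffic on both ends makes that comparison quick; a sketch with interface names assumed:

        # in front of (or on) the RRAS server
        tshark -i eth0 -f "ip proto 47 or tcp port 1723"

        # on the client side / client network edge, same filter for a like-for-like diff
        tshark -i eth1 -f "ip proto 47 or tcp port 1723"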

    Read the article

  • NTFS Permissions - Access Denied even though Explicit Allow and no Deny

    - by chris613
    I'm hoping someone can help me with this NTFS permissions problem. The short version is that I can't write a new file in F:\SomeDir even though I seem to be granted full permissions via both the "Domain Admins" group and a second unprivileged group. The "Effective Permissions" tab in the explorer permissions UI shows that I have full control, and there are no "Deny"s anywhere in the ACL or anything else that looks unusual. I am logged into the machine over RDP and accessing the disk directly, not through a share. F:\SomeDir>set U USERDNSDOMAIN=THEOFFICE.LOCAL USERDOMAIN=THEOFFICE USERNAME=thisisme USERPROFILE=C:\Users\thisisme F:\SomeDir>icacls . . BUILTIN\Administrators:(I)(F) CREATOR OWNER:(I)(OI)(CI)(IO)(F) THEOFFICE\Domain Admins:(I)(OI)(CI)(F) NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F) BUILTIN\Administrators:(I)(OI)(CI)(IO)(F) BUILTIN\Users:(I)(OI)(CI)(RX) Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "Domain Admins" The request will be processed at a domain controller for domain THEOFFICE.local. Group name Domain Admins Comment Designated administrators of the domain Members ------------------------------------------------------------------------------- Administrator thatguy thisisme The command completed successfully. F:\SomeDir>echo "whyUNoCreateFile?" > whyUNoCreateFile.txt Access is denied. I searched for answers and came across similar problems that lead to UAC (ex. Why does removing the EVERYONE group prevent domain admins from accessing a drive? ). I can't turn off UAC at the moment, so I try a "regular" group that I'm also part of. This group has no special rights assignments and is not part of any administrative groups. Still no dice: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\iteveryone:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>net group /domain "iteveryone" The request will be processed at a domain controller for domain THEOFFICE.local. Group name ITeveryone Comment Members ------------------------------------------------------------------------------- Administrator thatguy thisisme otherguy someitguy The command completed successfully. F:\ScanningVMsForIBM>echo y > u Access is denied. As you can see, using a "regular" group didn't help. I have logged out and back in to the server to ensure my login token is up to date, and at any rate I belonged to these groups before the server was created. If I grant explicit permission to myself, it does allow me to write files: [***** This one command executed in an elevated shell *****] F:\SomeDir>icacls . /grant THEOFFICE\thisisme:(OI)(CI)F processed file: . Successfully processed 1 files; Failed processing 0 files F:\SomeDir>echo y > u F:\SomeDir>type u y My requirement is for the "Domain Admins" group to have Full Control, or if that's not possible without disabling UAC, then a second group will do, but I can't get either to work. I'm really stumped. Can someone please point out what I could be overlooking?
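
    Given that an explicit grant to the user works while grants via groups do not, and this is an interactive RDP logon on a UAC-enabled server, it is worth checking what the filtered (non-elevated) token actually contains; on such a token the administrative groups are typically present only as "Group used for deny only" and cannot satisfy Allow ACEs. A quick check from the same cmd session that gets the error:

        REM list the groups in the current (non-elevated) access token
        whoami /groups

        REM if "THEOFFICE\Domain Admins" shows the attribute "Group used for deny only",
        REM UAC has filtered the token; retry the write from an elevated prompt to confirm:
        echo test > F:\SomeDir\whyUNoCreateFile.txt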

    Read the article

  • Launchd item no longer firing in Snow Leopard

    - by ridogi
    A launchd item that was working in 10.5 is no longer working after my upgrade to 10.6. I am running 10.6.2 and I have recreated the launchd item and given it a new name and that one doesn't run either. I have found a link of people with the same problem on google groups but none of the advice in that link helps. My launchd item is not listed in /private/var/db/launchd.db/com.apple.launchd/overrides.plist or in any of the overrides.plist files in the subdirectories of /private/var/db/launchd.db/ I have also tried to set this up as both a user agent and a user daemon. My launchd item simply runs a shell script, which I have no problem launching manually. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>Label</key> <string>com.eric.tmnotify.launchd</string> <key>ProgramArguments</key> <array> <string>/<path_to>/tmnotify.sh</string> </array> <key>StartInterval</key> <integer>3600</integer> </dict> </plist> I have tried to load it by overriding the disabled key (even though it is not disabled in any of the overrides.plist files) with both: sudo launchctl load -F /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist sudo launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist and after running either of them I can see that it is running by using sudo launchctl list but the shell script never fires. Edit: I have also put this in the formerly blank file at /private/var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist : <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>com.eric.tmnotify.launchd</key> <dict> <key>Disabled</key> <false/> </dict> </dict> </plist> I also tried inserting this alphabetically: <key>com.eric.tmnotify.launchd</key> <dict> <key>Disabled</key> <false/> </dict> into the file /private/var/db/launchd.db/com.apple.launchd/overrides.plist but still no dice.
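
    One thing that may be worth ruling out (a sketch, not a confirmed fix): an agent in ~/Library/LaunchAgents is normally managed by the user's own launchd, so loading it with sudo can register it in the wrong context. The log paths below are placeholders:

        # Unload whatever is currently registered, then load it as the
        # logged-in user (no sudo) and confirm it shows up for this user:
        launchctl unload /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl list | grep com.eric.tmnotify.launchd

        # Adding these keys to the plist captures any output from tmnotify.sh,
        # which helps distinguish "never fired" from "fired and failed":
        #   <key>StandardOutPath</key><string>/tmp/tmnotify.out</string>
        #   <key>StandardErrorPath</key><string>/tmp/tmnotify.err</string>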

    Read the article

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at stackoverflow as well, but still haven't received any answers that have helped me to solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on. I have a VPS account on linode.com Server OS is Ubuntu 10.04 Local OS (shouldn't matter, but just so you know) used to deploy with Capistrano is Snow Leopard 10.6.6 I use RVM on the server. Version is 1.2.2 I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7 with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby passenger -v prints the following: Phusion Passenger version 3.0.2 Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running as I can use nginx to serve static files, but this could have something to do with my problem. I have two users dealing with the install. root which I used to install everything, and deployer which is a user I created specifically to for deploying my applications My web app directory is in the deployer user's home directory as follows: /home/deployer/webapps/mysite.com/public Per Capistrano default deploy, a symbolic link called current is created in the public folder, and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release I have chmodded the deployer directory recursively to 777 /opt/nginx permissions: rwxr-xr-x /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx My nginx config file has gone through just short of eternity variations, but currently looks like this: ================================================================================== worker_processes 1; events { worker_connections 1024; } http { passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2; passenger_ruby /usr/local/rvm/bin/passenger_ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { # listen *:80; server_name mysite.com www.mysite.com; root /home/deployer/webapps/mysite.com/public/current; passenger_enabled on; passenger_friendly_error_pages on; access_log logs/mysite.com/server.log; error_log logs/mysite.com/error.log info; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } ================================================================================== I bounce nginx, hit the site, and boom. 403, and logs say directory index of /home/deployer... is forbidden As others with a similar problem have said, you can drop an index.html into the public/releases/current_release and it will render. But rails no worky. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of. 
    I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really appreciate it. If there are specific permission things you want me to check (i.e. groups/permissions), please include the commands to do so as well; hopefully that will also help others who read this post in the future. Let me know if there is any other information I can provide, and thanks in advance!
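
    Two quick checks that might narrow this down (a sketch; the /opt/nginx path comes from the post above, everything else should be adjusted to the actual server):

        # Which nginx is actually running, and was that binary built with the
        # Passenger module? (The running binary may not be the one on $PATH.)
        ps aux | grep '[n]ginx: master'
        /opt/nginx/sbin/nginx -V 2>&1 | grep -i passenger

        # Confirm every directory above the app is traversable by the nginx
        # worker user; a non-traversable parent is a common cause of
        # "directory index of ... is forbidden".
        ls -ld /home /home/deployer /home/deployer/webapps \
               /home/deployer/webapps/mysite.com \
               /home/deployer/webapps/mysite.com/public \
               /home/deployer/webapps/mysite.com/public/current/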

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workloads on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal: the on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data

    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, and integration with 3rd-party ticketing applications. AWS monitoring via this plug-in is enabled through the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. The plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    [Pictorial view of the overall architecture omitted: the on-premise EM12c agent monitors Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) through the CloudWatch API.]

    Here are a few key features:

    - Rich and exhaustive list of metrics; metrics can be gathered from an Agent running outside AWS.
    - Critical configuration information.
    - Custom home pages with charts and AWS configuration information.
    - Incidents generated based on thresholds set on monitoring data.

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

        AWS Resource Type | Target Type        | Resource-specific properties
        EBS Resource      | Amazon EBS Service | CloudWatch base URI, EC2 Base URI, Period, Volume Id, Proxy Server and Port
        EC2 Resource      | Amazon EC2 Service | CloudWatch base URI, EC2 Base URI, Period, Instance Id, Proxy Server and Port
        RDS Resource      | Amazon RDS Service | CloudWatch base URI, RDS Base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Read Me guide for more details and step-by-step instructions on deploying the plug-in and adding AWS instances.

        ./emcli add_target \
              -name="<target name>" \
              -type="AmazonEC2Service" \
              -host="<host>" \
              -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
              -subseparator=properties="="

        ./emcli set_monitoring_credential \
              -set_name="AWSKeyCredentialSet" \
              -target_name="<target name>" \
              -target_type="AmazonEC2Service" \
              -cred_type="AWSKeyCredential" \
              -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

        EBS Resource
          Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
          Metrics: Response (Status); Utilization (QueueLength, IdleTime); Volume Statistics (ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput); Operation Statistics (ReadSize, WriteSize, ReadLatency, WriteLatency)

        EC2 Resource
          Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
          Metrics: Response (Status); CPU Utilization (CPU Utilization); Disk I/O (DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput); Network I/O (NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput)

        RDS Resource
          Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
          Metrics: Response (Status); Disk I/O (ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput); DB Utilization (BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage)

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, the last 24 hours of availability, top metrics and the incidents generated. [Screenshots of the EBS, EC2 and RDS instance home pages omitted.]

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services
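
    For completeness, an RDS target can presumably be added the same way as the EC2 example above, using the RDS-specific properties from the mapping table. The internal type name (AmazonRDSService) and the exact property keys below are assumptions patterned on the EC2 command; confirm them against the plug-in's Installation and Read Me guide before relying on this sketch.

        ./emcli add_target \
              -name="<target name>" \
              -type="AmazonRDSService" \
              -host="<host>" \
              -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;RDS_BaseURI=http://rds.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<RDS instance Id>;Period=<data point period>" \
              -subseparator=properties="="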

    Read the article

  • Getting this CSS to work in IE6

    - by jerrygarciuh
    Hi folks, Working on this page: http://www.karlsenner.dreamhosters.com/about.php and having trouble with the navigation in IE6. It validates as XHTML 1.0 Transitional. Works great in FF, IE 8, Chrome, and Windows Safari. In IE6 and Opera 10 the drop menus appear too high. I tried adding in the different versions of http://code.google.com/p/ie7-js/ but it did not solve the issue in IE. The CSS looks like this: #wrapper { position: relative; display: block; background-color: inherit; margin: 0px auto; padding: 0; width: 900px; min-height: 900px; } #nav {} .navImage { position:relative; display:inline; height:102px; /* added in hopes of helping IE position but no dice */ } .subMenu { position:absolute; z-index:10; background-color:#FFF; top: 14px; left:0; } .subMenu a:link, .subMenu a:visited, .subMenu a:active{ display:block; width:90%; padding:6px; margin:0; color:#3CF; font-family:Tahoma, Geneva, sans-serif; font-size:14px; text-decoration:none; font-weight:bold; } .subMenu a:hover{ display:block; width:90%; padding:6px; margin:0; color:#3CF; background-color:#CCC; font-family:Tahoma, Geneva, sans-serif; font-size:14px; text-decoration:none; font-weight:bold; } jQuery rollovers: $('#navcompany').hover(function () { $('#companyMenu').css('display', 'block'); $('#companyImg').attr('src','g/nav/company_over.gif'); }, function () { $('#companyMenu').css('display', 'none'); $('#companyImg').attr('src','g/nav/company.gif'); }); And one of the cells. Since the menu is coming out of PHP and IE was not respecting the widths I just use PHP to get the nav image widths and write them to styles on the fly. Solved the width issue as IE acted like they should inherit their width from the wrapper. This may be a clue as to why they don't appear below their nav images but I can't sort it. <div id="navcompany" class="navImage" style="width:128px"> <a href="about.php"> <img src="g/nav/company_over.gif" name="companyImg" width="128" height="102" border="0" id="companyImg" alt="company" /> </a> <div id="companyMenu" class="subMenu" style="display:none; width:128px"> <a href="about.php">About us</a> <a href="location.php">Our location</a> </div> </div> Any advice greatly appreciated! JG

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web-app deployed over Tomcat. The app uses pretty standard session based security where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8), set to medium privacy. In IE 8, I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies"- no dice. However, when I run Tomcat on my local machine, IE accepts the cookie, and sessions work just fine. And now, for the HTTP headers. From Chrome, a logged in user gets a session GET http://devl:8080/testing/ HTTP/1.1 Host: devl:8080 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 HTTP/1.1 200 OK Server: Apache-Coyote/1.1 P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA" Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing Content-Type: text/html;charset=UTF-8 Content-Language: en-US Content-Length: 2450 Date: Fri, 26 Mar 2010 14:14:40 GMT GET http://devl:8080/testing/logout HTTP/1.1 Host: devl:8080 Connection: keep-alive User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5 Referer: http://devl:8080/testing/ Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397 ... From IE 8, with standard medium level security and privacy- GET http://devl:8080/testing/ HTTP/1.1 Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */* Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0) UA-CPU: AMD64 Accept-Encoding: gzip, deflate Host: devl:8080 Connection: Keep-Alive HTTP/1.1 200 OK Server: Apache-Coyote/1.1 P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA" Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing Content-Type: text/html;charset=UTF-8 Content-Language: en-US Content-Length: 2450 Date: Fri, 26 Mar 2010 14:32:34 GMT GET http://devl:8080/testing/logout HTTP/1.1 Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */* Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA Accept-Language: en-US User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0) UA-CPU: AMD64 Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: devl:8080 ... I thought it might be P3P, but on adding a compact policy, nothing changes. This is the standard Tomcat session, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?

    Read the article

  • PHP: Proper way of using a PDO database connection in a class

    - by Cortopasta
    Trying to organize all my code into classes, and I can't get the database queries to work inside a class. I tested it without the class wrapper, and it worked fine. Inside the class = no dice. What about my classes is messing this up? class ac { public function dbConnect() { global $dbcon; $dbInfo['server'] = "localhost"; $dbInfo['database'] = "sn"; $dbInfo['username'] = "sn"; $dbInfo['password'] = "password"; $con = "mysql:host=" . $dbInfo['server'] . "; dbname=" . $dbInfo['database']; $dbcon = new PDO($con, $dbInfo['username'], $dbInfo['password']); $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $error = $dbcon->errorInfo(); if($error[0] != "") { print "<p>DATABASE CONNECTION ERROR:</p>"; print_r($error); } } public function authentication() { global $dbcon; $plain_username = $_POST['username']; $md5_password = md5($_POST['password']); $ac = new ac(); if (is_int($ac->check_credentials($plain_username, $md5_password))) { ?> <p>Welcome!</p> <!--go to account manager here--> <?php } else { ?> <p>Not a valid username and/or password. Please try again.</p> <?php unset($_POST['username']); unset($_POST['password']); $ui = new ui(); $ui->start(); } } private function check_credentials($plain_username, $md5_password) { global $dbcon; $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1'); $userid->bindParam(':username', $plain_username); $userid->bindParam(':password', $md5_password); $userid->execute(); print_r($dbcon->errorInfo()); $id = $userid->fetch(); Return $id; } } And if it's any help, here's the class that's calling it: require_once("ac/acclass.php"); $ac = new ac(); $ac->dbconnect(); class ui { public function start() { if ((!isset($_POST['username'])) && (!isset($_POST['password']))) { $ui = new ui(); $ui->loginform(); } else { $ac = new ac(); $ac->authentication(); } } private function loginform() { ?> <form id="userlogin" action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post"> User:<input type="text" name="username"/><br/> Password:<input type="password" name="password"/><br/> <input type="submit" value="submit"/> </form> <?php } }

    Read the article

< Previous Page | 8 9 10 11 12 13  | Next Page >