Search Results

Search found 3459 results on 139 pages for 'if modified since'.


  • /usr/bin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with ldap. Most of the servers run with UsePam yes except this one, it has lack of pam support on sshd. root@linserv9:~# ldd /usr/sbin/sshd linux-vdso.so.1 => (0x00007fff621fe000) libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000) libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000) libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000) libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000) libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000) libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000) libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000) /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000) I have this packages installed root@linserv9:~# dpkg -l|grep -E 'pam|ssh' ii denyhosts 2.6-2.1 an utility to help sys admins thwart ssh hac ii libpam-modules 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules for PAM ii libpam-runtime 0.99.7.1-5ubuntu6.1 Runtime support for the PAM library ii libpam-ssh 1.91.0-9.2 enable SSO behavior for ssh and pam ii libpam0g 0.99.7.1-5ubuntu6.1 Pluggable Authentication Modules library ii libpam0g-dev 0.99.7.1-5ubuntu6.1 Development files for PAM ii openssh-blacklist 0.1-1ubuntu0.8.04.1 list of blacklisted OpenSSH RSA and DSA keys ii openssh-client 1:4.7p1-8ubuntu1.2 secure shell client, an rlogin/rsh/rcp repla ii openssh-server 1:4.7p1-8ubuntu1.2 secure shell server, an rshd replacement ii quest-openssh 5.2p1_q13-1 Secure shell root@linserv9:~# What I'm doing wrong? thanks. Edit: root@linserv9:~# cat /etc/pam.d/sshd # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password Edit2: UsePAM yes fails With this configuration ssh fails to start : root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config |grep -vE "^[ \t]*$|^#" Port 22 Protocol 2 ListenAddress 0.0.0.0 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys ChallengeResponseAuthentication yes UsePAM yes Subsystem sftp /usr/lib/sftp-server root@linserv9:/home/admmarc# The error it gives is as follows root@linserv9:/home/admmarc# /etc/init.d/ssh start * Starting OpenBSD Secure Shell server sshd /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM /etc/ssh/sshd_config: terminating, 1 bad configuration options ...fail! root@linserv9:/home/admmarc#
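
    A hedged starting point: the dpkg listing shows both openssh-server and quest-openssh installed, so a plausible explanation is that /usr/sbin/sshd is the quest-openssh build, compiled without PAM — which would account both for the missing libpam in the ldd output and for "Bad configuration option: UsePAM". A quick check with standard Debian/Ubuntu tooling (a sketch, not a guaranteed fix):

        # Which package owns the binary the init script actually starts?
        dpkg -S /usr/sbin/sshd
        which sshd && dpkg -S "$(which sshd)"

        # Does that binary have PAM compiled in at all?
        ldd /usr/sbin/sshd | grep -i libpam || echo "no PAM support in this build"

        # If the Ubuntu openssh-server (built against PAM) is the one you want running,
        # reinstalling it is one way to put its sshd back in place
        apt-get install --reinstall openssh-server

    If dpkg -S reports quest-openssh, the UsePAM error follows naturally: that build simply does not know the keyword.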


  • Open-source, client-side IRC client

    - by user125197
    I'm administrating a homepage with an IRC-Webclient. At the moment, it uses a Java-client, modified to be SSL-compatible and compatible with whatever design I want to use. The usual PJIRC thing, with users wishes and concerns implemented (I'll give you SSL-PJIRC if you want - no problem. The login-script will be improved for IE6-support this weekend, means write the object tag if an old IE is found, that's it. Rest runs perfectly well). But still, the better user experience would be a client that only requires the user to enable JavaScript. So, before I go into rewriting and customizing a complete Java-IRC-solution, I'd like to ask some other people - after researching for half a year of time. The requirements are: a) Free hosting, no cgi-bin is acceptable. A lot of hosters also have CGI supported, but it doesn't allow IRC-access. So, seems you are limited to x10-hosting for CGI:IRC. b) Solution must be open source (not free as in free beer, but free as in free speech - I want to be able to read every line of the code). c) Hosting cannot be limited to Windows-hosting. Free operation systems (anything with Linux, BSD, whatever - I think you get what I mean) must be possible. d) No server-side technology. I'm limited to everything that runs client-side only. (Well, a Java-Applet does...). d) Solution must support SSL. e) The side must be able to move. Means, you register to a new free hoster, load your things up via FileZilla and the thing simply runs. No server-concerning implementations needed. f) Old browsers must be supported. From all my research, there are two ways a) Java-Applet b) Use HTML5-Websockets (it is impossible to use the JavaScript-library I found, as it depends on ActiveX - means, Windows hosting). b) means "you are going to enter very unstable content". I worked with HTML5 for month, and, well ... Though, HTML5 also is not suitable for older browsers (old browser support is an absolutely necessary requirement). PHP by the way is a server side solution ... so no line of PHP. My idea now was a Java Applet, and a fallback which again is embedded and proprietary, but only requieres JavaScript (wsirc or Mibbit are great here, whilst I prefer wsirc ... and zooming fonts that are plainly to small, but worse, cannot read the code). So my question is - with free hosting, not installation on server, plain upload - do you see and open source, complelety client-side, ssl-compatible way to use or write a client? This wouldn't be my first programming project, I'm not afraid of the code. If there was a way and I had to write it, even from scratch, I'd do. From all I know, there sadly is none. But maybe some experienced admin has an idea? Best Wishes, yetanotheruser Sry my English isn't perfect. It's enough for programming at least.


  • nconf nagios config no services defined

    - by user1508056
    I've setup Nagios core on OSX 10.7 server via macports fine. It seems to load fine and the sample config files all copied over to /opt/local/etc/nagios/objects/ fine and are specified correctly in the nagios.cfg file. I then installed nconf manually and got it running without much fight. Then I clicked on "Generate Nagios config" in nconf and get 1 warning and 4 errors. When I expand the error box here what I see: Nagios Core 3.5.0 Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors Copyright (c) 1999-2009 Ethan Galstad Last Modified: 03-15-2013 License: GPL Website: http://www.nagios.org Reading configuration data... Read main config file okay... Read object config files okay... Running pre-flight check on configuration data... Checking services... Error: There are no services defined! Checked 0 services. Checking hosts... Error: There are no hosts defined! Checked 0 hosts. Checking host groups... Checked 0 host groups. Checking service groups... Checked 0 service groups. Checking contacts... Error: There are no contacts defined! Checked 0 contacts. Checking contact groups... Checked 0 contact groups. Checking service escalations... Checked 0 service escalations. Checking service dependencies... Checked 0 service dependencies. Checking host escalations... Checked 0 host escalations. Checking host dependencies... Checked 0 host dependencies. Checking commands... Checked 0 commands. Checking time periods... Checked 0 time periods. Checking for circular paths between hosts... Checking for circular host and service dependencies... Checking global event handlers... Checking obsessive compulsive processor commands... Checking misc settings... Warning: Nothing specified for illegal_macro_output_chars variable! Total Warnings: 1 Total Errors: 3 I've tried several different things (played with cache settings, changed file permissions/ownership, edited some config files manually, etc.) but nothing gets me past this step. The thing is, when I run 'sudo nagios -v /opt/local/etc/nagios/nagios.cfg' the output shows it is reading a number of services, a localhost, and a contact in the .cfg files...so I'm pretty confident those are ok and the problem is nconf isnt reading the correct .cfg files or something like that. Any ideas what to double check? I did lots of googling and found nothing on this specific issue--so either I'm special (I'm not) or am overlooking something really simple. The path to nagios binary is listed as /opt/local/bin/nagios, if that matters. Also, all the nagios files are owned by nagios:nagios, wheras nconf files are owned by user, with only the directories/files specified in the nconf docs belonging to the _www user and/or group (things like output, temp, config, etc.). Thanks.
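
    A hedged observation: nconf's "Generate Nagios config" validates only the objects stored in nconf's own database, not the sample .cfg files shipped with Nagios, so a fresh nconf install with no hosts, services or contacts defined will report exactly these errors until objects are added (or imported) through its web UI. Once generation succeeds, the generated archive still has to be unpacked where nagios.cfg can see it. A sketch — the archive name and directory names are nconf defaults and assumptions about this particular layout:

        # deploy the generated config (nconf writes it to its output/ directory)
        tar -xzf /path/to/nconf/output/NagiosConfig.tgz -C /opt/local/etc/nagios/

        # and make sure nagios.cfg includes the deployed directories, e.g.
        #   cfg_dir=/opt/local/etc/nagios/Default_collector
        #   cfg_dir=/opt/local/etc/nagios/global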


  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In last week after doing more research on subject matter, I have been wondering about what I have been neglecting all those years to understand write-caching policy, always leaving it on default setting. Write-caching policy improves writing performance and consists of write-back caching and write-cache buffer flushing. This is how I understand all the above, but correct me if I erred somewhere: Write-through cache / Write-through caching itself is not a part of write caching policy per se and it's when data is written to both cache and storage device so if Windows will need that data later again, it is retrieved from cache and not from storage device which means only improved read performance as there is no need for waiting for storage device to read required data again. Since data is still written to storage device, write performance isn't improved and represents no risk of data loss or corruption in case of power failure or system crash while only data in cache gets lost. This option seems to be enabled by default and is recommended for removable devices with no need to use function of "Safely Remove Hardware" on user's part. Write-back caching is similar to above but without writing data to storage device, periodically releasing data from cache and writing to storage device when it is idle. In my opinion this option improves both read and write performance but represents risk if power failure or system crash occurs with the outcome of not only losing data eventually to be written to storage device, but causing file inconsistencies or corrupted file system. Write-back caching cannot be enabled together with write-through caching and it is not recommended to be enabled if no backup power supply is availabe. Write-cache buffer flushing I reckon is similar to write-back caching but enables immediate release and writing of data from cache to storage device right before power outage occurs but I don't know if it applies also to occasional system crash. This option seem to be complementary to write-back cache reducing or potentially eliminating risk of data loss or corruption of file system. I have questions about relevance of last 2 options to today's modern SSDs in order to get best performance and with less wear on SSDs: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM and worth taking the risk of utilizing it by enabling write-back cache? I read somewhere that generally storage device's cache is faster than RAM, but I want to be sure. Additionally I read that write-caching should be enabled since current data that is to be written later to NAND flash is kept for a while in cache and provided there is data that gets modified a lot before finally being written, holding of this data and its periodic release reduces its write times to SSD thereby reducing its wearing. Now regarding to write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know if SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM because if it is, keeping this option enabled would make sense. 
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research: I suspect the write-caching policy is the culprit behind the SSD's freezing issue, on the assumption that the data being flushed out of the cache is what causes the freezes. Currently I have write caching enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing conflict with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman.


  • I have an NGINX server configured to work with node.js, but a 1.03 MB JavaScript file often fails to load in various browsers on various PCs

    - by Totty
    I'm using this in a local LAN so it should be quite fast. The nginx server use the node.js server to serve static files, so it must pass throught node.js to download the files, but that is not a problem when I'm not using the nginx. In chrome with debugger on I can see that the status is: 206 - partial content and it only has downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status failed. Waiting time: 6ms Receiving: 1.1 min The headers in google chrom: Request URL:http://192.168.1.16/production/assembly/script/production.js Request Method:GET Status Code:206 Partial Content Request Headersview source Accept:*/* Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding:gzip,deflate,sdch Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4 Connection:keep-alive Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE Host:192.168.1.16 If-Range:"1081715-1350053827000" Range:bytes=16090-16090 Referer:http://192.168.1.16/production/assembly/ User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Response Headersview source Accept-Ranges:bytes Cache-Control:public, max-age=0 Connection:keep-alive Content-Length:1 Content-Range:bytes 16090-16090/1081715 Content-Type:application/javascript Date:Mon, 15 Oct 2012 09:18:50 GMT ETag:"1081715-1350053827000" Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT Server:nginx/1.1.19 X-Powered-By:Express My nginx configurations: File 1: user totty; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt; error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## autoindex on; include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*; } File that is included by the previous file: server { # custom location for entry # using only "/" instead of "/production/assembly" it # would allow you to go to "thatip/". In this way # we are limiting to "thatip/production/assembly/" location /production/assembly/ { # ip and port used in node.js proxy_pass http://127.0.0.1:3000/; } location /production/assembly.mongo/ { proxy_pass http://127.0.0.1:9000/; proxy_redirect off; } location /production/assembly.logs/ { autoindex on; alias /home/totty/web/production01_server/node_modules/production/_logs/; } }
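
    A hedged suggestion: when a proxied response stalls partway through, one common culprit is nginx's proxy buffering limits, since the upstream Express response is buffered by nginx before being passed on. Directives worth experimenting with inside the /production/assembly/ location — the values below are guesses for a ~1 MB file, not tuned numbers:

        location /production/assembly/ {
            proxy_pass http://127.0.0.1:3000/;

            # give nginx room to buffer the upstream response in memory ...
            proxy_buffer_size 64k;
            proxy_buffers 16 64k;
            proxy_busy_buffers_size 128k;

            # ... or let it spill large responses to a temp file
            proxy_max_temp_file_size 512m;

            # as a diagnostic, disabling buffering entirely shows whether
            # the buffering path is what breaks the transfer
            # proxy_buffering off;
        }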


  • Scripting Windows Shares - VBS

    - by Calvin Piche
    So i am totally new to VBS, never used it. I am trying to create multiple shares and i found a Microsoft VBS script that can do this(http://gallery.technet.microsoft.com/scriptcenter/6309d93b-fcc3-4586-b102-a71415244712) My question is, this script only allows for one domain group or user to be added for permissions where i am needing to add a couple with different permissions(got that figured out) Below is the script that i have modified for my needs but just need to add in the second group with the other permissions. If there is an easier way to do this please let me know. 'ShareSetup.vbs '========================================================================== Option Explicit Const FILE_SHARE = 0 Const MAXIMUM_CONNECTIONS = 25 Dim strComputer Dim objWMIService Dim objNewShare strComputer = "." Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2") Set objNewShare = objWMIService.Get("Win32_Share") Call sharesec ("C:\Published Apps\Logs01", "Logs01", "Log01", "Support") Call sharesec2 ("C:\Published Apps\Logs01", "Logs01", "Log01", "Domain Admins") Sub sharesec(Fname,shr,info,account) 'Fname = Folder path, shr = Share name, info = Share Description, account = account or group you are assigning share permissions to Dim FSO Dim Services Dim SecDescClass Dim SecDesc Dim Trustee Dim ACE Dim Share Dim InParam Dim Network Dim FolderName Dim AdminServer Dim ShareName FolderName = Fname AdminServer = "\\" & strComputer ShareName = shr Set Services = GetObject("WINMGMTS:{impersonationLevel=impersonate,(Security)}!" & AdminServer & "\ROOT\CIMV2") Set SecDescClass = Services.Get("Win32_SecurityDescriptor") Set SecDesc = SecDescClass.SpawnInstance_() 'Set Trustee = Services.Get("Win32_Trustee").SpawnInstance_ 'Trustee.Domain = Null 'Trustee.Name = "EVERYONE" 'Trustee.Properties_.Item("SID") = Array(1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0) Set Trustee = SetGroupTrustee("domain", account) 'Replace ACME with your domain name. 'To assign permissions to individual accounts use SetAccountTrustee rather than SetGroupTrustee Set ACE = Services.Get("Win32_Ace").SpawnInstance_ ACE.Properties_.Item("AccessMask") = 1179817 ACE.Properties_.Item("AceFlags") = 3 ACE.Properties_.Item("AceType") = 0 ACE.Properties_.Item("Trustee") = Trustee SecDesc.Properties_.Item("DACL") = Array(ACE) Set Share = Services.Get("Win32_Share") Set InParam = Share.Methods_("Create").InParameters.SpawnInstance_() InParam.Properties_.Item("Access") = SecDesc InParam.Properties_.Item("Description") = "Public Share" InParam.Properties_.Item("Name") = ShareName InParam.Properties_.Item("Path") = FolderName InParam.Properties_.Item("Type") = 0 Share.ExecMethod_ "Create", InParam End Sub Sub sharesec2(Fname,shr,info,account) 'Fname = Folder path, shr = Share name, info = Share Description, account = account or group you are assigning share permissions to Dim FSO Dim Services Dim SecDescClass Dim SecDesc Dim Trustee Dim ACE2 Dim Share Dim InParam Dim Network Dim FolderName Dim AdminServer Dim ShareName FolderName = Fname AdminServer = "\\" & strComputer ShareName = shr Set Services = GetObject("WINMGMTS:{impersonationLevel=impersonate,(Security)}!" 
& AdminServer & "\ROOT\CIMV2") Set SecDescClass = Services.Get("Win32_SecurityDescriptor") Set SecDesc = SecDescClass.SpawnInstance_() 'Set Trustee = Services.Get("Win32_Trustee").SpawnInstance_ 'Trustee.Domain = Null 'Trustee.Name = "EVERYONE" 'Trustee.Properties_.Item("SID") = Array(1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0) Set Trustee = SetGroupTrustee("domain", account) 'Replace ACME with your domain name. 'To assign permissions to individual accounts use SetAccountTrustee rather than SetGroupTrustee Set ACE2 = Services.Get("Win32_Ace").SpawnInstance_ ACE2.Properties_.Item("AccessMask") = 1179817 ACE2.Properties_.Item("AceFlags") = 3 ACE2.Properties_.Item("AceType") = 0 ACE2.Properties_.Item("Trustee") = Trustee SecDesc.Properties_.Item("DACL") = Array(ACE2) End Sub Function SetAccountTrustee(strDomain, strName) set objTrustee = getObject("Winmgmts: {impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").Spawninstance_ set account = getObject("Winmgmts: {impersonationlevel=impersonate}!root/cimv2:Win32_Account.Name='" & strName & "',Domain='" & strDomain &"'") set accountSID = getObject("Winmgmts: {impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID &"'") objTrustee.Domain = strDomain objTrustee.Name = strName objTrustee.Properties_.item("SID") = accountSID.BinaryRepresentation set accountSID = nothing set account = nothing set SetAccountTrustee = objTrustee End Function Function SetGroupTrustee(strDomain, strName) Dim objTrustee Dim account Dim accountSID set objTrustee = getObject("Winmgmts: {impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").Spawninstance_ set account = getObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Group.Name='" & strName & "',Domain='" & strDomain &"'") set accountSID = getObject("Winmgmts: {impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID &"'") objTrustee.Domain = strDomain objTrustee.Name = strName objTrustee.Properties_.item("SID") = accountSID.BinaryRepresentation set accountSID = nothing set account = nothing set SetGroupTrustee = objTrustee End Function


  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning here for completeness. My httpd.conf looks roughly like this: <VirtualHost *:80> ServerName external.example.com ServerAlias example.com # Serve all local content directly, reverse-proxy all unknown URIs. RewriteEngine On RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d RewriteRule ^.*$ - [L] RewriteRule ^/~ - [L] RewriteRule ^(.*)$ http://internal.example.com$1 [P] # Standard header rewriting. ProxyPassReverse / http://internal.example.com/foo/ ProxyPassReverseCookieDomain internal.example.com external.example.com ProxyPassReverseCookiePath /foo/ / # Strip any Accept-Encoding: headers from the client so we can process the pages # as plain text. RequestHeader unset Accept-Encoding # Use mod_proxy_html to fix URLs in text/html content. ProxyHTMLEnable On ProxyHTMLURLMap http://internal.example.com/foo/ / ProxyHTMLURLMap http://internal.example.com/foo / ProxyHTMLURLMap /foo/ / ## Use mod_substitute to fix URLs in CSS and Javascript #<Location /> # AddOutputFilterByType SUBSTITUTE text/css # AddOutputFilterByType SUBSTITUTE text/javascript # Substitute "s|http://internal.example.com/foo/|/|nq" #</Location> # Use mod_ext_filter to fix URLs in CSS and Javascript ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls" ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls" <Location /> SetOutputFilter fixurlcss;fixurljs </Location> </VirtualHost> The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session: $ wget -O /dev/null -S http://external.example.com/include/jquery.js --2013-11-01 11:36:36-- http://external.example.com/include/jquery.js Resolving external.example.com (external.example.com)... 192.168.0.1 Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... 
HTTP/1.1 200 OK Date: Fri, 01 Nov 2013 15:36:36 GMT Server: Apache Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT ETag: "1d60026-187b8-4e9e0ec273e35" Accept-Ranges: bytes Vary: Accept-Encoding X-UA-Compatible: IE=edge,chrome=1 Content-Type: text/javascript;charset=utf-8 Connection: close Transfer-Encoding: chunked Length: unspecified [text/javascript] Saving to: `/dev/null' [ <=> ] 100,280 --.-K/s in 0.005s 2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success).Retrying. --2013-11-01 11:36:38-- (try: 2) http://external.example.com/include/jquery.js Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 416 Requested Range Not Satisfiable Date: Fri, 01 Nov 2013 15:36:38 GMT Server: Apache Vary: Accept-Encoding Content-Type: text/html;charset=utf-8 Content-Length: 260 Connection: close The file is already fully retrieved; nothing to do. Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs and upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)
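
    Before blaming a particular filter, it can help to confirm on the wire that the terminating zero-length chunk really is missing; curl's --raw flag leaves the chunked framing intact so the end of the stream can be inspected directly. A hedged diagnostic, using the hostnames from the question:

        # dump the raw chunked stream without decoding it; a well-formed response
        # must end with a zero-length chunk:  0\r\n\r\n
        curl -s --raw http://external.example.com/include/jquery.js | tail -c 32 | od -c

        # for comparison, the same file fetched from the internal host directly
        curl -s --raw http://internal.example.com/foo/include/jquery.js | tail -c 32 | od -c

    If the filtered response lacks the final chunk while the unfiltered one has it, that narrows the problem to the output-filter path rather than mod_proxy itself.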


  • Listview Row Overlap Problem

    - by rgrandy
    I just updated my app and I am getting some odd complaints from people who update it. I am only getting complaints from people with non-stock android phones (phones that manufacturers have modified...HTC phones, cliq, pulse, etc), other phones like the Droid, Nexus work fine. My app (Photo Frame Deluxe) has a list in it with a Image View, Text View, View (spacer) and checkbox, all in a row. What happens on the affected phones is that the rows start overlapping and it cuts the top half of everything off. My layout code for this is below, I am pulling my hair out on this, what might I have wrong in this layout. Why does this work on some phones and not on others? Any help would be appreciated. Row Layout: <?xml version="1.0" encoding="UTF-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <ImageView android:id="@+id/photorowIcon" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_vertical" android:paddingTop="10dp" android:paddingBottom="10dp" android:paddingRight="5dp" /> <TextView android:id="@+id/photorowText" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_vertical" /> <View android:layout_width="0px" android:layout_height="wrap_content" android:layout_weight="1"/> <CheckBox android:id="@+id/photorowCheckBox" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_vertical" android:focusable="false" android:focusableInTouchMode="false" /> </LinearLayout> Layout Row is inserted in: <?xml version="1.0" encoding="UTF-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical"> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:background="@drawable/title1_gradient" android:orientation="vertical"> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Select Photos to Display:" android:textSize="20sp" android:textStyle="bold" android:textColor="#FFFFFFFF" android:paddingLeft="5dp" android:paddingRight="5dp" android:paddingTop="5dp" /> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:id="@+id/folderName" android:textSize="15sp" android:textStyle="bold" android:textColor="#FFFFFFFF" android:paddingLeft="5dp" android:paddingRight="5dp" android:paddingBottom="5dp" /> <View android:layout_width="fill_parent" android:layout_height="1px" android:background="#406C6C6C"/> </LinearLayout> <ListView android:id="@android:id/list" android:layout_width="fill_parent" android:layout_height="0px" android:layout_weight="1" android:drawSelectorOnTop="false" android:paddingLeft="5dp" android:paddingRight="5dp" /> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical" android:layout_gravity="bottom"> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal" android:layout_gravity="bottom" android:background="#FF6C6C6C" android:padding="5dp"> <Button android:layout_width="fill_parent" android:layout_height="wrap_content" android:id="@+id/ok" android:text="OK"/> </LinearLayout> </LinearLayout> </LinearLayout>
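
    A hedged workaround often suggested for manufacturer-skinned ListViews is to give each row an explicit minimum height instead of relying purely on wrap_content, so a row can never be measured shorter than its tallest child. A sketch of the row's root element with that change; the children stay exactly as posted:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:minHeight="?android:attr/listPreferredItemHeight"
            android:gravity="center_vertical"
            android:orientation="horizontal">
            <!-- ImageView / TextView / spacer View / CheckBox unchanged -->
        </LinearLayout>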


  • How to get attribute value using SelectSingleNode?

    - by Nano HE
    I am parsing a xml document, I need find out the gid (an attribute) value (3810). Based on SelectSingleNode(). I found it is not easy to find the attribute name and it's value. Can I use this method or I must switch to other way. Attached my code. How can I use book obj to get the attribute value3810 for gid. Thank you. My test.xml file as below <?xml version="1.0" ?> <root> <VersionInfo date="2007-11-28" version="1.0.0.2" /> <Attributes> <AttrDir name="EFEM" DirID="1"> <AttrDir name="Aligner" DirID="2"> <AttrDir name="SequenceID" DirID="3"> <AttrObj text="Slot01" gid="3810" unit="" scale="1" /> <AttrObjCount value="1" /> </AttrDir> </AttrDir> </AttrDir> </Attributes> </root> I wrote the test.cs as below public class Sample { public static void Main() { XmlDocument doc = new XmlDocument(); doc.Load("test.xml"); XmlNode book; XmlNode root = doc.DocumentElement; book = root.SelectSingleNode("Attributes[AttrDir[@name='EFEM']/AttrDir[@name='Aligner']/AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']]"); Console.WriteLine("Display the modified XML document...."); doc.Save(Console.Out); } } [Update 06/10/2010] The xml file is a complex file. Included thousands of gids. But for each of Xpath, the gid is unique. I load the xml file to a TreeView control. this.treeView1.AfterSelect += new System.Windows.Forms.TreeViewEventHandler(this.treeView1_AfterSelect);. When treeView1_AfterSelect event occurred, the e.Node.FullPath will return as a String Value. I parse the string Value e.Node.FullPath. Then I got the member of XPath Above. Then I tried to find which gid item was selected. I need find the gid value as a return value indeed.
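
    One thing worth noting: the XPath in the sample selects the Attributes element (the square brackets only filter it), so book ends up one level too high to read gid. Selecting the AttrObj node itself and then reading its attribute collection is enough; a sketch against the test.xml shown above:

        // select the AttrObj node directly, then read its gid attribute
        XmlNode obj = root.SelectSingleNode(
            "Attributes/AttrDir[@name='EFEM']/AttrDir[@name='Aligner']" +
            "/AttrDir[@name='SequenceID']/AttrObj[@text='Slot01']");

        if (obj != null && obj.Attributes["gid"] != null)
        {
            string gid = obj.Attributes["gid"].Value;   // "3810"
            Console.WriteLine(gid);
        }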


  • System.ServiceModel.Syndication.SyndicationFeed throws when the RSS document includes a <script> block

    - by Cheeso
    The code: using (XmlReader xmlr = XmlReader.Create(new StringReader(allXml))) { var items = from item in SyndicationFeed.Load(xmlr).Items select item; } The exception: Exception: System.Xml.XmlException: Unexpected node type Element. ReadElementString method can only be called on elements with simple or empty content. Line 11, position 25. at System.Xml.XmlReader.ReadElementString() at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadXml(XmlReader reader, SyndicationFeed result) at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFeed(XmlReader reader) at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFrom(XmlReader reader) at System.ServiceModel.Syndication.SyndicationFeed.Load[TSyndicationFeed](XmlReader reader) at System.ServiceModel.Syndication.SyndicationFeed.Load(XmlReader reader) at Ionic.ToolsAndTests.ReadRss.Run() in c:\dev\dotnet\ReadRss.cs:line 90 The XML content: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/styles/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" > <channel> <title>Software architecture, software engineering, and Renaissance Jazz</title> <link>https://www.ibm.com/developerworks/mydeveloperworks/blogs/gradybooch</link> <atom:link rel="self" type="application/rss+xml" href="https://www.ibm.com/developerworks/mydeveloperworks/blogs/gradybooch/feed/entries/rss?lang=en" /> <description>Software architecture, software engineering, and Renaissance Jazz</description> <language>en-us</language> <copyright>Copyright <script type='text/javascript'> document.write(blogsDate.date.localize (1273534889181));</script></copyright> <lastBuildDate>Mon, 10 May 2010 19:41:29 -0400</lastBuildDate> As you can see, on line 11, at position 25, there's a script block inside the <copyright> element. Other people have reported similar errors with other XML documents. The way I worked around this was to do a StreamReader.ReadToEnd, then do Regex.Replace on the result of that to yank out the script block, before passing the modified string to XmlReader.Create(). Feels like a hack. Has anyone got a better approach? I don't like this because I have to read in a 125k string into memory. Is it valid rss to include "complex content" like that - a script block inside an element?
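
    A hedged alternative to the regex: let an XML parser remove the offending <script> elements and hand the cleaned tree straight to SyndicationFeed, which avoids fragile string matching (it still holds the document in memory, so it does not address the 125k concern). A sketch, assuming allXml holds the feed text and System.Xml.Linq is referenced:

        XDocument doc = XDocument.Parse(allXml);
        doc.Descendants("script").Remove();            // drop script blocks wherever they occur

        using (XmlReader xmlr = doc.CreateReader())
        {
            SyndicationFeed feed = SyndicationFeed.Load(xmlr);
            var items = feed.Items;
        }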


  • Using backport-android-bluetooth on Android 1.6

    - by newth
    Hi, I'm trying to write an application using Bluetooth on Android 1.6. Since it's not officially supported, I found the backport of android.bluetooth API ( http://code.google.com/p/backport-android-bluetooth ). But when I deploy the sample chat application (modified for backport) LogCat gives me the error below: My question is, how I can use backport-android-bluetooth on 1.6 and is there any working samples? Thanks! 11-30 14:03:19.890: ERROR/AndroidRuntime(1927): Uncaught handler: thread main exiting due to uncaught exception 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): java.lang.ExceptionInInitializerError 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at backport.android.bluetooth.BluetoothSocket.<init>(BluetoothSocket.java:69) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at backport.android.bluetooth.BluetoothServerSocket.<init>(BluetoothServerSocket.java:16) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at backport.android.bluetooth.BluetoothAdapter.listenUsingRfcommWithServiceRecord(BluetoothAdapter.java:513) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at com.example.bluetooth.BluetoothChatService$AcceptThread.<init>(BluetoothChatService.java:237) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at com.example.bluetooth.BluetoothChatService.start(BluetoothChatService.java:109) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at com.example.bluetooth.BluetoothChat.onResume(BluetoothChat.java:138) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.Instrumentation.callActivityOnResume(Instrumentation.java:1225) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.Activity.performResume(Activity.java:3559) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread.performResumeActivity(ActivityThread.java:2838) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread.handleResumeActivity(ActivityThread.java:2866) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2420) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread.access$2100(ActivityThread.java:116) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.os.Handler.dispatchMessage(Handler.java:99) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.os.Looper.loop(Looper.java:123) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.app.ActivityThread.main(ActivityThread.java:4203) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at java.lang.reflect.Method.invokeNative(Native Method) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at java.lang.reflect.Method.invoke(Method.java:521) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at dalvik.system.NativeStart.main(Native Method) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): Caused by: java.lang.UnsatisfiedLinkError: classInitNative 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.bluetooth.RfcommSocket.classInitNative(Native Method) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): at android.bluetooth.RfcommSocket.<clinit>(RfcommSocket.java:152) 11-30 14:03:19.906: ERROR/AndroidRuntime(1927): ... 21 more


  • Sunrise / sunset calculations

    - by dassouki
    I'm trying to calculate the sunset / rise times using python based on the link provided below. My results done through excel and python do not match the real values. Any ideas on what I could be doing wrong? My Excel sheet can be found under .. http://transpotools.com/sun_time.xls # Created on 2010-03-28 # @author: dassouki # @source: [http://williams.best.vwh.net/sunrise_sunset_algorithm.htm][2] # @summary: this is based on the Nautical Almanac Office, United States Naval # Observatory. import math, sys class TimeOfDay(object): def calculate_time(self, in_day, in_month, in_year, lat, long, is_rise, utc_time_zone): # is_rise is a bool when it's true it indicates rise, # and if it's false it indicates setting time #set Zenith zenith = 96 # offical = 90 degrees 50' # civil = 96 degrees # nautical = 102 degrees # astronomical = 108 degrees #1- calculate the day of year n1 = math.floor( 275 * in_month / 9 ) n2 = math.floor( ( in_month + 9 ) / 12 ) n3 = ( 1 + math.floor( in_year - 4 * math.floor( in_year / 4 ) + 2 ) / 3 ) new_day = n1 - ( n2 * n3 ) + in_day - 30 print "new_day ", new_day #2- calculate rising / setting time if is_rise: rise_or_set_time = new_day + ( ( 6 - ( long / 15 ) ) / 24 ) else: rise_or_set_time = new_day + ( ( 18 - ( long/ 15 ) ) / 24 ) print "rise / set", rise_or_set_time #3- calculate sun mean anamoly sun_mean_anomaly = ( 0.9856 * rise_or_set_time ) - 3.289 print "sun mean anomaly", sun_mean_anomaly #4 calculate true longitude true_long = ( sun_mean_anomaly + ( 1.916 * math.sin( math.radians( sun_mean_anomaly ) ) ) + ( 0.020 * math.sin( 2 * math.radians( sun_mean_anomaly ) ) ) + 282.634 ) print "true long ", true_long # make sure true_long is within 0, 360 if true_long < 0: true_long = true_long + 360 elif true_long > 360: true_long = true_long - 360 else: true_long print "true long (360 if) ", true_long #5 calculate s_r_a (sun_right_ascenstion) s_r_a = math.degrees( math.atan( 0.91764 * math.tan( math.radians( true_long ) ) ) ) print "s_r_a is ", s_r_a #make sure it's between 0 and 360 if s_r_a < 0: s_r_a = s_r_a + 360 elif true_long > 360: s_r_a = s_r_a - 360 else: s_r_a print "s_r_a (modified) is ", s_r_a # s_r_a has to be in the same Quadrant as true_long true_long_quad = ( math.floor( true_long / 90 ) ) * 90 s_r_a_quad = ( math.floor( s_r_a / 90 ) ) * 90 s_r_a = s_r_a + ( true_long_quad - s_r_a_quad ) print "s_r_a (quadrant) is ", s_r_a # convert s_r_a to hours s_r_a = s_r_a / 15 print "s_r_a (to hours) is ", s_r_a #6- calculate sun diclanation in terms of cos and sin sin_declanation = 0.39782 * math.sin( math.radians ( true_long ) ) cos_declanation = math.cos( math.asin( sin_declanation ) ) print " sin/cos declanations ", sin_declanation, ", ", cos_declanation # sun local hour cos_hour = ( math.cos( math.radians( zenith ) ) - ( sin_declanation * math.sin( math.radians ( lat ) ) ) / ( cos_declanation * math.cos( math.radians ( lat ) ) ) ) print "cos_hour ", cos_hour # extreme north / south if cos_hour > 1: print "Sun Never Rises at this location on this date, exiting" # sys.exit() elif cos_hour < -1: print "Sun Never Sets at this location on this date, exiting" # sys.exit() print "cos_hour (2)", cos_hour #7- sun/set local time calculations if is_rise: sun_local_hour = ( 360 - math.degrees(math.acos( cos_hour ) ) ) / 15 else: sun_local_hour = math.degrees( math.acos( cos_hour ) ) / 15 print "sun local hour ", sun_local_hour sun_event_time = sun_local_hour + s_r_a - ( 0.06571 * rise_or_set_time ) - 6.622 print "sun event time ", sun_event_time #final result 
time_in_utc = sun_event_time - ( long / 15 ) + utc_time_zone return time_in_utc #test through main def main(): print "Time of day App " # test: fredericton, NB # answer: 7:34 am long = 66.6 lat = -45.9 utc_time = -4 d = 3 m = 3 y = 2010 is_rise = True tod = TimeOfDay() print "TOD is ", tod.calculate_time(d, m, y, lat, long, is_rise, utc_time) if __name__ == "__main__": main()
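
    Two things stand out when comparing this with the USNO algorithm it cites (hedged, not verified against the spreadsheet): the grouping of the cos_hour expression — the division should apply to the whole numerator, not just the sin term — and the sign convention of the test coordinates, since the algorithm expects western longitude as negative. A sketch of the corrected step:

        # step 7a of the cited algorithm:
        #   cosH = (cos(zenith) - sinDec * sin(lat)) / (cosDec * cos(lat))
        cos_hour = ((math.cos(math.radians(zenith))
                     - (sin_declanation * math.sin(math.radians(lat))))
                    / (cos_declanation * math.cos(math.radians(lat))))

        # test coordinates for Fredericton, NB (north latitude positive, west longitude negative)
        lat = 45.9
        long = -66.6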


  • Get Local IP-Address using Boost.Asio

    - by MOnsDaR
    Hey, I'm currently searching for a portable way of getting the local IP-addresses. Because I'm using Boost anyway I thought it would be a good idea to use Boost.Asio for this task. There are serveral examples on the net which should do the trick. Examples: Official Boost.Asio Documentation Some Asian Page I tried both codes with just slight modifications. The Code on Boost.Doc was changed to not resolve "www.boost.org" but "localhost" or my hostname instead. For getting the hostname I used boost::asio::ip::host_name() or typed it directly as a string. Additionally I wrote my own code which was a merge of the above examples and my (little) knowledge I gathered from the Boost Documentation and other examples. All the sources worked, but they did just return the following IP: 127.0.1.1 (Thats not a typo, its .1.1 at the end) I run and compiled the code on Ubuntu 9.10 with GCC 4.4.1 A colleague tried the same code on his machine and got 127.0.0.2 (Not a typo too...) He compiled and run on Suse 11.0 with GCC 4.4.1 (I'm not 100% sure) I don't know if it is possible to change the localhost (127.0.0.1), but I know that neither me or my colleague did it. ifconfig says loopback uses 127.0.0.1. ifconfig also finds the public IP I am searching for (141.200.182.30 in my case, subnet is 255.255.0.0) So is this a Linux-issue and the code is not as portable as I thought? Do I have to change something else or is Boost.Asio not working as a solution for my problem at all? I know there are much questions about similar topics on Stackoverflow and other pages, but I cannot find information which is useful in my case. If you got useful links, it would be nice if you could point me to it. Thanks in advance, MOnsDaR PS: Here is the modified code I used from Boost.Doc: #include <boost/asio.hpp> using boost::asio::ip::tcp; boost::asio::io_service io_service; tcp::resolver resolver(io_service); tcp::resolver::query query(boost::asio::ip::host_name(), ""); tcp::resolver::iterator iter = resolver.resolve(query); tcp::resolver::iterator end; // End marker. while (iter != end) { tcp::endpoint ep = *iter++; std::cout << ep << std::endl; }
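
    A hedged note: 127.0.1.1 is what Debian/Ubuntu installers map the machine's hostname to in /etc/hosts, so resolving host_name() faithfully returns it. A workaround that stays inside Boost.Asio is to "connect" a UDP socket towards any routable address and read back which local address the kernel picked — no packet is actually sent. A sketch (the target address is arbitrary; any non-loopback IP works):

        #include <boost/asio.hpp>
        #include <iostream>

        int main()
        {
            boost::asio::io_service io_service;
            boost::asio::ip::udp::socket sock(io_service);

            // connecting a UDP socket only selects a route and local address
            boost::asio::ip::udp::endpoint target(
                boost::asio::ip::address::from_string("192.0.2.1"), 53);
            sock.connect(target);   // the socket is opened automatically if needed

            std::cout << sock.local_endpoint().address().to_string() << std::endl;
            return 0;
        }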


  • Using Joda DateTime as a Jersey parameter?

    - by HolySamosa
    I'd like to use Joda's DateTime for query parameters in Jersey, but this isn't supported by Jersey out-of-the-box. I'm assuming that implementing an InjectableProvider is the proper way to add DateTime support. Can someone point me to a good implementation of an InjectableProvider for DateTime? Or is there an alternative approach worth recommending? (I'm aware I can convert from Date or String in my code, but this seems like a lesser solution). Thanks. Solution: I modified Gili's answer below to use the @Context injection mechanism in JAX-RS rather than Guice. import com.sun.jersey.core.spi.component.ComponentContext; import com.sun.jersey.spi.inject.Injectable; import com.sun.jersey.spi.inject.PerRequestTypeInjectableProvider; import java.util.List; import javax.ws.rs.QueryParam; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.Context; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.ws.rs.core.UriInfo; import javax.ws.rs.ext.Provider; import org.joda.time.DateTime; /** * Enables DateTime to be used as a QueryParam. * <p/> * @author Gili Tzabari */ @Provider public class DateTimeInjector extends PerRequestTypeInjectableProvider<QueryParam, DateTime> { private final UriInfo uriInfo; /** * Creates a new DateTimeInjector. * <p/> * @param uriInfo an instance of {@link UriInfo} */ public DateTimeInjector( @Context UriInfo uriInfo) { super(DateTime.class); this.uriInfo = uriInfo; } @Override public Injectable<DateTime> getInjectable(final ComponentContext cc, final QueryParam a) { return new Injectable<DateTime>() { @Override public DateTime getValue() { final List<String> values = uriInfo.getQueryParameters().get(a.value()); if( values == null || values.isEmpty()) return null; if (values.size() > 1) { throw new WebApplicationException(Response.status(Status.BAD_REQUEST). entity(a.value() + " may only contain a single value").build()); } return new DateTime(values.get(0)); } }; } }
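
    With the provider above registered, a resource method can take the parameter directly; a minimal usage sketch (the resource path and parameter name are made up for illustration):

        @GET
        @Path("/events")
        public Response getEvents(@QueryParam("since") DateTime since)
        {
            // 'since' is null when the query parameter is absent,
            // otherwise it is parsed by the injector above
            return Response.ok().build();
        }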


  • MIPS: removing non-alphanumeric characters from a string

    - by Kron
    I'm in the process of writing a program in MIPS that will determine whether or not a user entered string is a palindrome. It has three subroutines which are under construction. Here is the main block of code, subroutines to follow with relevant info: .data Buffer: .asciiz " " # 80 bytes in Buffer intro: .asciiz "Hello, please enter a string of up to 80 characters. I will then tell you if that string was a palindrome!" .text main: li $v0, 4 # print_string call number la $a0, intro # pointer to string in memory syscall li $v0, 8 #syscall code for reading string la $a0, Buffer #save read string into buffer li $a1, 80 #string is 80 bytes long syscall li $s0, 0 #i = 0 li $t0, 80 #max for i to reach la $a0, Buffer jal stripNonAlpha li $v0, 4 # print_string call number la $a0, Buffer # pointer to string in memory syscall li $s0, 0 jal findEnd jal toUpperCase li $v0, 4 # print_string call number la $a0, Buffer # pointer to string in memory syscall Firstly, it's supposed to remove all non alpha-numeric characters from the string before hand, but when it encounters a character designated for removal, all characters after that are removed. stripNonAlpha: beq $s0, $t0, stripEnd #if i = 80 end add $t4, $s0, $a0 #address of Buffer[i] in $t4 lb $s1, 0($t4) #load value of Buffer[i] addi $s0, $s0, 1 #i = i + 1 slti $t1, $s1, 48 #if ascii code is less than 48 bne $t1, $zero, strip #remove ascii character slti $t1, $s1, 58 #if ascii code is greater than 57 #and slti $t2, $s1, 65 #if ascii code is less than 65 slt $t3, $t1, $t2 bne $t3, $zero, strip #remove ascii character slti $t1, $s1, 91 #if ascii code is greater than 90 #and slti $t2, $s1, 97 #if ascii code is less than 97 slt $t3, $t1, $t2 bne $t3, $zero, strip #remove ascii character slti $t1, $s1, 123 #if ascii character is greater than 122 beq $t1, $zero, strip #remove ascii character j stripNonAlpha #go to stripNonAlpha strip: #add $t5, $s0, $a0 #address of Buffer[i] in $t5 sb $0, 0($t4) #Buffer[i] = 0 #addi $s0, $s0, 1 #i = i + 1 j stripNonAlpha #go to stripNonAlpha stripEnd: la $a0, Buffer #save modified string into buffer jr $ra #return Secondly, it is supposed to convert all lowercase characters to uppercase. toUpperCase: beq $s0, $s2, upperEnd add $t4, $s0, $a0 lb $s1, 0($t4) addi $s1, $s1, 1 slti $t1, $s1, 97 #beq $t1, $zero, upper slti $t2, $s1, 123 slt $t3, $t1, $t2 bne $t1, $zero, upper j toUpperCase upper: add $t5, $s0, $a0 addi $t6, $t6, -32 sb $t6, 0($t5) j toUpperCase upperEnd: la $a0, Buffer jr $ra The final subroutine, which checks if the string is a palindrome isn't anywhere near complete at the moment. I'm having trouble finding the end of the string because I'm not sure what PC-SPIM uses as the carriage return character. Any help is appreciated, I have the feeling most of my problems result from something silly and stupid so feel free to point out anything, no matter how small.
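
    Two hedged observations: SPIM's read_string syscall behaves like fgets, so the buffer normally ends with a newline (ASCII 10) followed by a null byte — a linefeed, not a carriage return, is what marks the end of the typed text; and storing 0 into Buffer[i] inside strip terminates the string at that point, which is why everything after the first rejected character disappears. The usual fix is to compact the string with separate read and write indices; a sketch (register choices are illustrative):

        # $a0 = address of Buffer, $t0 = read index, $t1 = write index
        compact:
            add  $t4, $a0, $t0        # address of Buffer[read]
            lb   $s1, 0($t4)          # current character
            beq  $s1, $zero, done     # reached the terminating null
            addi $t0, $t0, 1
            # ... the alphanumeric range tests from stripNonAlpha go here;
            #     branch back to compact (skipping the character) if it is unwanted ...
            add  $t5, $a0, $t1        # address of Buffer[write]
            sb   $s1, 0($t5)          # keep the character
            addi $t1, $t1, 1
            j    compact
        done:
            add  $t5, $a0, $t1
            sb   $zero, 0($t5)        # terminate the compacted string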


  • Ajax call to WCF Windows service over SSL (HTTPS)

    - by bpatrick100
    I have a windows service which exposes an endpoint over http. Again this is a windows service (not a web service hosted in iis). I then call methods from this endpoint, using javascript/ajax. Everything works perfectly, and this the code I'm using in my windows service to create the endpoint: //Create host object WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("http://192.168.0.100:1213")); //Add Https Endpoint WebHttpBinding binding = new WebHttpBinding(); webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); //Add MEX Behaivor and EndPoint ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior(); metadataBehavior.HttpGetEnabled = true; webServiceHost.Description.Behaviors.Add(metadataBehavior); webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpBinding(), "mex"); webServiceHost.Open(); Now, my goal is to get this same model working over SSL (https not http). So, I have followed the guidance of several msdn pages, like the following: http://msdn.microsoft.com/en-us/library/ms733791(VS.100).aspx I have used makecert.exe to create a test cert called "bpCertTest". I have then used netsh.exe to configure my port (1213) with the test cert I created, all with no problem. Then, I've modified the endpoint code in my windows service to be able to work over https as follows: //Create host object WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri("https://192.168.0.100:1213")); //Add Https Endpoint WebHttpBinding binding = new WebHttpBinding(); binding.Security.Mode = WebHttpSecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate; webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); webServiceHost.Credentials.ServiceCertificate.SetCertificate("CN=bpCertTest", StoreLocation.LocalMachine, StoreName.My); //Add MEX Behaivor and EndPoint ServiceMetadataBehavior metadataBehavior = new ServiceMetadataBehavior(); metadataBehavior.HttpsGetEnabled = true; webServiceHost.Description.Behaviors.Add(metadataBehavior); webServiceHost.AddServiceEndpoint(ServiceMetadataBehavior.MexContractName, MetadataExchangeBindings.CreateMexHttpsBinding(), "mex"); webServiceHost.Open(); The service creates the endpoint successfully, recognizes my cert in the SetCertificate() call, and the service starts up and running with success. Now, the problem is my javascript/ajax call cannot communicate with the service over https. I simply get some generic commication error (12031). So, as a test, I changed the port I was calling in the javascript to some other random port, and I get the same error - which tells me that I'm obviously not even reaching my service over https. I'm at a complete loss at this point, I feel like everything is in place, and I just can't see what the problem is. If anyone has experience in this scenario, please provide your insight and/or solution! Thanks!
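
    A hedged diagnosis: a 12031 (connection reset) from the browser usually means the TLS handshake never completes, so two things are worth checking independently of the WCF code — whether the certificate is actually bound to the port for the address being used, and whether the browser trusts the test certificate at all (an untrusted self-signed cert, or a CN of "bpCertTest" on a URL addressed by IP, makes XMLHttpRequest fail silently over HTTPS). The bindings can be inspected from an elevated prompt:

        rem is a certificate bound to the port (the binding is per ipport, not per hostname)?
        netsh http show sslcert ipport=0.0.0.0:1213
        netsh http show sslcert ipport=192.168.0.100:1213

        rem is there a URL reservation for the HTTPS listener?
        netsh http show urlacl

    Browsing to https://192.168.0.100:1213/ directly and seeing whether the browser warns about the certificate is a quick way to rule the trust and name-mismatch issues in or out.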


  • Databinding to ObservableCollection in a different UserControl?

    - by Dave
    Question re-written on 2010-03-24 I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code behind, like this: private void CandyDialog_Loaded( object sender, RoutedEventArgs e) { _candyviewer = new CandyViewer(); _candyviewer.DataContext = _tracker; candy_tab.Content = _candyviewer; } The CandyViewer's XAML looks like this (edited for kaxaml): <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Page.Resources> <DataTemplate x:Key="CandyItemTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="120"></ColumnDefinition> <ColumnDefinition Width="150"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox> <!-- just binding to DataContext ends up using InventoryItem as parent, so we need to get to the UserControl --> <ComboBox Grid.Column="1" SelectedItem="{Binding SelectedCandy, Mode=TwoWay}" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}" Margin="3"></ComboBox> </Grid> </DataTemplate> </Page.Resources> <Grid> <ListBox DockPanel.Dock="Top" ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}" ItemTemplate="{StaticResource CandyItemTemplate}" /> </Grid> </Page> Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output Window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side. Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.
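
    A hedged reading of the symptom (initial values bind, later model-side changes never appear): either the model replaces the collection instance rather than mutating the one the view is bound to, or it mutates it from a non-UI thread. If Tracker exposes the collections as settable properties, raising PropertyChanged when they are swapped keeps the binding pointed at the live instance. A sketch — the member names and element type are assumptions:

        public class Tracker : INotifyPropertyChanged
        {
            private ObservableCollection<string> _candyNames =
                new ObservableCollection<string>();

            public ObservableCollection<string> CandyNames
            {
                get { return _candyNames; }
                set { _candyNames = value; OnPropertyChanged("CandyNames"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    If the instance is never replaced, the next suspect is threading: CollectionChanged raised off the UI thread does not update WPF ItemsControls, so changes made from a worker thread have to be dispatched back to the UI thread.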


  • Why is drawing to OnPaint graphics faster than image graphics?

    - by Tesserex
    I'm looking for a way to speed up the drawing of my game engine, which is currently the significant bottleneck, and is causing slowdowns. I'm on the verge of converting it over to XNA, but I just noticed something. Say I have a small image that I've loaded. Image img = Image.FromFile("mypict.png"); We have a picturebox on the screen we want to draw on. So we have a handler. pictureBox1.Paint += new PaintEventHandler(pictureBox1_Paint); I want our loaded image to be tiled on the picturebox (this is for a game, after all). Why on earth is this code: void pictureBox1_Paint(object sender, PaintEventArgs e) { for (int y = 0; y < 16; y++) for (int x = 0; x < 16; x++) e.Graphics.DrawImage(image, x * 16, y * 16, 16, 16); } over 25 TIMES FASTER than this code: Image buff = new Bitmap(256, 256, PixelFormat.Format32bppPArgb); // actually a form member void pictureBox1_Paint(object sender, PaintEventArgs e) { using (Graphics g = Graphics.FromImage(buff)) { for (int y = 0; y < 16; y++) for (int x = 0; x < 16; x++) g.DrawImage(image, x * 16, y * 16, 16, 16); } e.Graphics.DrawImage(buff, 0, 0, 256, 256); } To eliminate the obvious, I've tried commenting out the last e.Graphics.DrawImage (which means I don't see anything, but it gets rid a call that isn't in the first example). I've also left in the using block (needlessly) in the first example, but it's still just as blazingly fast. I've set properties of g to match e.Graphics - things like InterpolationMode, CompositingQuality, etc, but nothing I do bridges this incredible gap in performance. I can't find any difference between the two Graphics objects. What gives? My test with a System.Diagnostics.Stopwatch says that the first code snippet runs at about 7100 fps, while the second runs at a measly 280 fps. My reference image is VS2010ImageLibrary\Objects\png_format\WinVista\SecurityLock.png, which is 48x48 px, and which I modified to be 72 dpi instead of 96, but those made no difference either.
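
    One factor that often dominates DrawImage timings (hedged — it may not be the whole story here) is pixel-format conversion: an Image loaded straight from a PNG is typically non-premultiplied 32bppArgb, and GDI+ converts it on every draw, whereas drawing an already-premultiplied bitmap is closer to a plain copy. Pre-converting the tile once is cheap to try and independent of the OnPaint-vs-buffer question:

        // convert the loaded PNG to premultiplied ARGB once, up front
        Bitmap tile = new Bitmap(image.Width, image.Height, PixelFormat.Format32bppPArgb);
        using (Graphics g = Graphics.FromImage(tile))
        {
            g.DrawImage(image, 0, 0, image.Width, image.Height);
        }
        // ... then use 'tile' instead of 'image' inside both paint loops ...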

    Read the article

  • Detect an Uninstall in a Launch Condition using Wix MSIs

    - by coxymla
    I've been playing around with Wix, making a little app with auto-generated installer and three versions to test the upgradability: 1.0, 1.1 and 2.0.

    1.1 is meant to be able to upgrade from 1.0, and not to allow the user to install 1.1 if 1.1 is already present.

        <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07">
          <UpgradeVersion Minimum="1.0.0" IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes" Property="OLDERVERSIONBEINGUPGRADED" />
          <UpgradeVersion Minimum="1.1.0" IncludeMinimum="yes" OnlyDetect="yes" Property="NEWERVERSIONDETECTED" />
        </Upgrade>

        <Condition Message="A later version of [ProductName] is already installed. Setup will now exit.">
          NOT (NEWERVERSIONDETECTED OR Installed)
        </Condition>

    Problem #1: 1.1 can't be uninstalled, because the condition is set and checked during the uninstall.

    2.0 is meant to be able to upgrade from 1.1, and not to upgrade from 1.0 ('too old'.) It shouldn't be able to install on top of itself either.

        <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07">
          <UpgradeVersion Minimum="1.1.0" IncludeMinimum="yes" Maximum="1.1.0" IncludeMaximum="yes" Property="OLDERVERSIONBEINGUPGRADED" />
        </Upgrade>
        <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07">
          <UpgradeVersion Minimum="2.0.0" OnlyDetect="yes" Property="NEWERVERSIONDETECTED" />
        </Upgrade>
        <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07">
          <UpgradeVersion Minimum="1.0.0" IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes" Property="TOOOLDVERSIONDETECTED" />
        </Upgrade>

        <Condition Message="A later version of [ProductName] is already installed. Setup will now exit.">
          NOT NEWERVERSIONDETECTED OR Installed
        </Condition>
        <Condition Message="A version of [ProductName] that is already installed is too old to be upgraded. Setup will now exit.">
          NOT TOOOLDVERSIONDETECTED
        </Condition>

    Problem #2: If I try to upgrade from 1.1, I hit my modified later version condition. (Error: A later version of Main Application 1.1 is already installed. Setup will now exit.)

    Problem #3: The installer allows me to install 2.0 over the top of itself.

    What am I doing wrong with my Upgrade code and conditions to get these problems in my MSIs?

    Read the article

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls. The idea is we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database. Self-Tracking Entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state. It can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself.

    In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of:

        protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections[collection.GetHashCode()] = deletedEntities;
            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

    Now if the user decides to save his changes to the server, I can get the list of removed items, and send them along:

        ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
        customers.RemoveAt(0);
        MyServiceProxy.UpdateCustomers(customers);

    At this point the UpdateCustomers method will check my shadow collection to see if any items were removed, and send them along to the server side.

    This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with some complicated solution that basically does manual memory management in this case. I keep a WeakReference to the ObservableCollection and every few seconds I check to see if the reference is inactive, in which case I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution. Thanks!
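
    One option worth evaluating here (not something from the question itself) is ConditionalWeakTable<TKey, TValue> from .NET 4.0: it keys entries on the collection instance without keeping it alive, so the shadow list is released automatically when the ObservableCollection is garbage collected, with no polling of WeakReferences. A rough sketch under that assumption, with illustrative names:

        using System.Collections;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Runtime.CompilerServices;

        // Hypothetical replacement for the GetHashCode-keyed dictionary.
        public static class DeletionTracker
        {
            // Entries vanish when the key (the collection) becomes unreachable.
            private static readonly ConditionalWeakTable<object, IList> Deleted =
                new ConditionalWeakTable<object, IList>();

            public static void Subscribe<TEntity>(ObservableCollection<TEntity> collection)
            {
                var deletedEntities = (List<TEntity>)Deleted.GetValue(collection, _ => new List<TEntity>());
                collection.CollectionChanged += (o, a) =>
                {
                    if (a.OldItems != null)
                    {
                        deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                    }
                };
            }

            public static IList<TEntity> GetDeleted<TEntity>(ObservableCollection<TEntity> collection)
            {
                IList list;
                return Deleted.TryGetValue(collection, out list)
                    ? (IList<TEntity>)list
                    : new List<TEntity>();
            }
        }

    A side benefit: keying on object identity rather than GetHashCode also removes the risk of two collections colliding on the same hash code.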

    Read the article

  • Making mercurial subrepositories behave like subversion externals

    - by Emily Dickinson
    Hi guys, the FAQ and hginit.com have been really useful for helping me make the transition from svn to hg. However, when it comes to using Hg's subrepository feature in the manner of subversion's externals, I've tried everything and cannot replicate the nice behavior of svn externals. Here's the simplest example of what I want to do:

    1. Init a "lib" repository. This repository is never to be used as a standalone; it's always included by main repositories, as a sub-repository.

    2. Init one or more including repositories. To keep the example simple, I'll "init" a repository called "main".

    3. Have "main" include "lib" as a subrepository.

    Importantly -- AND HERE'S WHAT I CAN'T GET TO WORK: when I modify a file inside of "main/lib", and I push the modification, then that change gets pushed to the "lib" repository -- NOT to a copy inside of "main".

    Command lines speak louder than words. I've tried so many variations on this theme, but here's the gist. If someone can reply, in command lines, I'll be forever grateful!

    1. Init the "lib" repository:

        $ cd /home/moi/hgrepos   ## Where I'm storing my hg repositories, on my main server
        $ hg init lib
        $ echo "foo" > lib/lib.txt
        $ hg add lib
        $ hg ci -A -m "Init lib" lib

    2. Init the "main" repository, and include "lib" as a subrepos:

        $ cd /home/moi/hgrepos
        $ hg init main
        $ echo "foo" > main/main.txt
        $ hg add main
        $ cd main
        $ hg clone ../lib lib
        $ echo "lib=lib" > .hgsub
        $ hg ci -A -m "Init main" .

    This all works fine, but when I make a clone of the "main" repository, and make local modifications to files in "main/lib", and push them, the changes get pushed to "main/lib", NOT to "lib".

    IN COMMAND-LINE-ESE, THIS IS THE PROBLEM:

        $ cd /home/moi/hg-test
        $ hg clone ssh://[email protected]/hgrepos/lib lib
        $ hg clone ssh://[email protected]/hgrepos/main main
        $ cd main
        $ echo foo >> lib/lib.txt
        $ hg st
        M lib.txt
        $ hg com -m "Modified lib.txt, from inside the main repos" lib.txt
        $ hg push
        pushing to ssh://[email protected]/hgrepos/main/lib

    That last line of output from hg shows the problem. It shows that I've made a modification to a COPY of a file in lib, NOT to a file in the lib repository. If this were working as I'd like it to work, the push would be to hgrepos/lib, NOT to hgrepos/main/lib. I.e., I would see:

        $ hg push
        pushing to ssh://[email protected]/hgrepos/lib

    IF YOU CAN ANSWER THIS IN TERMS OF COMMAND LINES RATHER THAN IN ENGLISH, I WILL BE ETERNALLY GRATEFUL! Thank you in advance!

    Emily in Portland

    Read the article

  • Send User-Agent through CONNECT and POST with WinHTTP?

    - by Duncan Bayne
    I'm trying to POST to a secure site using WinHttp, and running into a problem where the User-Agent header isn't being sent along with the CONNECT. I am using a lightly-modified code sample from MSDN:

        HINTERNET hHttpSession = NULL;
        HINTERNET hConnect = NULL;
        HINTERNET hRequest = NULL;

        WINHTTP_AUTOPROXY_OPTIONS AutoProxyOptions;
        WINHTTP_PROXY_INFO ProxyInfo;
        DWORD cbProxyInfoSize = sizeof(ProxyInfo);

        ZeroMemory(&AutoProxyOptions, sizeof(AutoProxyOptions));
        ZeroMemory(&ProxyInfo, sizeof(ProxyInfo));

        hHttpSession = WinHttpOpen(L"WinHTTP AutoProxy Sample/1.0",
                                   WINHTTP_ACCESS_TYPE_NO_PROXY,
                                   WINHTTP_NO_PROXY_NAME,
                                   WINHTTP_NO_PROXY_BYPASS,
                                   0);
        if (!hHttpSession)
            goto Exit;

        hConnect = WinHttpConnect(hHttpSession, L"server.com", INTERNET_DEFAULT_HTTPS_PORT, 0);
        if (!hConnect)
            goto Exit;

        hRequest = WinHttpOpenRequest(hConnect, L"POST", L"/resource", NULL,
                                      WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES,
                                      WINHTTP_FLAG_SECURE);
        if (!hRequest)
            goto Exit;

        WINHTTP_PROXY_INFO proxyInfo;
        proxyInfo.dwAccessType = WINHTTP_ACCESS_TYPE_NAMED_PROXY;
        proxyInfo.lpszProxy = L"192.168.1.2:3199";
        proxyInfo.lpszProxyBypass = L"";
        WinHttpSetOption(hHttpSession, WINHTTP_OPTION_PROXY, &proxyInfo, sizeof(proxyInfo));

        WinHttpSetCredentials(hRequest, WINHTTP_AUTH_TARGET_PROXY, WINHTTP_AUTH_SCHEME_BASIC,
                              L"proxyuser", L"proxypass", NULL);

        if (!WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS, 0, "content", 7, 7, 0))
        {
            goto Exit;
        }

        if (!WinHttpReceiveResponse(hRequest, NULL))
            goto Exit;

        /* handle result */

        Exit:
        if (ProxyInfo.lpszProxy != NULL)
            GlobalFree(ProxyInfo.lpszProxy);
        if (ProxyInfo.lpszProxyBypass != NULL)
            GlobalFree(ProxyInfo.lpszProxyBypass);
        if (hRequest != NULL)
            WinHttpCloseHandle(hRequest);
        if (hConnect != NULL)
            WinHttpCloseHandle(hConnect);
        if (hHttpSession != NULL)
            WinHttpCloseHandle(hHttpSession);

    What this does is connect to my server through an authenticated proxy at 192.168.1.2:3199, and make a POST. This works, but when I examine the proxy logs the User-Agent string ("WinHTTP AutoProxy Sample/1.0") is not being sent as part of the CONNECT. It is however sent as part of the POST. Could someone please tell me how I can change this code to have the User-Agent header sent during both the CONNECT and POST?

    Edited to add: we are observing this problem only on Windows 7. If we run the same code on a Windows Vista box, we can see the User-Agent header being sent on CONNECT.

    Read the article

  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this:

    1. Creating a new d3d9.dll, which is then copied to the game's path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: you have to put it there before you start the game.

    2. Same as the first method, but replacing the DLL in system32 directly. Downside: you cannot add game-specific code. You cannot exclude applications where you don't want your DLL to be loaded.

    3. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL starting address, when it is mapped to the game, to get the actual offset, and then hook it. Downside: the offset is not the same on every system.

    4. Hooking Direct3DCreate9 to get the D3D9 pointer, then hooking D3D9->CreateDevice to get the device pointer, and then hooking Device->EndScene through the virtual table. Downside: the DLL cannot be injected when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9.

    5. Creating a new Device in a new window, as soon as the DLL gets injected. Then, getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug with windowed vs. fullscreen mode etc.

    6. Same as the third method, but doing a pattern scan to get EndScene. Downside: doesn't look that reliable.

    How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll's on other systems, and with a method which is reliable? How does FRAPS, for example, perform its DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:

    - it (the ID) should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and the same OS installation,
    - it should not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with identical hardware configuration (or one as similar as possible), the client software should be aware of that change.

    A little bit of explanation and some back-story: this is basically an age-old question that also touches the topic of software copy-protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :)

    I'm working on a client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of a specific client.

    While I was looking for a solution, I was aware of the following:

    - The Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications.
    - Disk imaging software copies along all Volume IDs (tied to each partition when formatted), as well as custom, uniquely generated IDs created during the installation process, during first run, or in any other way that is strictly software in nature and stored in the registry or on the hard drive, so it's very easy to confuse the two.

    The obvious choice for this kind of problem would be to find out the BIOS identifiers (not 100% sure if these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning, and that can't be changed (at least not by using some user-space program). Everything else fails as either being not reliable (MAC cloning, anyone?), or too demanding (in terms that it's too sensitive to configuration changes).

    Am I missing something obvious here? A sub-question that I'd like to ask is: am I doing it correctly, architecture-wise? Perhaps there is a better tool for the task that I have to accomplish...

    Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can be even completely software-based and non-unique at any given moment), and tells the client to come up with a different ID during the handshake, if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with one of the requirements to tie attributes to a specific client during its lifetime.
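
    For what it's worth, the BIOS identifiers mentioned above can be read from WMI on Windows. This is not from the question; the class and method shape below are a sketch of the idea, and on some boards the value comes back blank or as placeholder text, so it can only ever be one input into an ID rather than the whole answer. A minimal C# sketch:

        using System;
        using System.Management; // reference System.Management.dll

        static class HardwareId
        {
            // Reads the BIOS serial number via WMI. On some machines this is
            // empty or "To be filled by O.E.M.", so callers should fall back
            // to other identifiers (e.g. Win32_BaseBoard.SerialNumber).
            public static string ReadBiosSerial()
            {
                using (var searcher = new ManagementObjectSearcher("SELECT SerialNumber FROM Win32_BIOS"))
                {
                    foreach (ManagementObject mo in searcher.Get())
                    {
                        var serial = mo["SerialNumber"] as string;
                        if (!string.IsNullOrEmpty(serial))
                            return serial.Trim();
                    }
                }
                return null;
            }
        }

    Combining such a value with a hash of a few other stable properties, rather than relying on any single one, is the usual compromise when no identifier is guaranteed to be unique on its own.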

    Read the article

  • Deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

        @Entity
        @Table(name = "package")
        public class Package {
            protected Content content;

            @OneToOne(cascade = {javax.persistence.CascadeType.ALL})
            @JoinColumn(name = "content_id")
            @Fetch(value = FetchMode.JOIN)
            public Content getContent() {
                return content;
            }

            public void setContent(Content content) {
                this.content = content;
            }
        }

        @Entity
        @Table(name = "content")
        public class Content {
            private Set<Content> subContents = new HashSet<Content>();
            private ArchivalInformationPackage parentPackage;

            @OneToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "subcontents",
                       joinColumns = {@JoinColumn(name = "content_id")},
                       inverseJoinColumns = {@JoinColumn(name = "elt")})
            @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                              org.hibernate.annotations.CascadeType.REPLICATE})
            @Fetch(value = FetchMode.SUBSELECT)
            public Set<Content> getSubContents() {
                return subContents;
            }

            public void setSubContents(Set<Content> subContents) {
                this.subContents = subContents;
            }

            @ManyToOne(cascade = {CascadeType.ALL})
            @JoinColumn(name = "parent_package_id")
            public Package getParentPackage() {
                return parentPackage;
            }

            public void setParentPackage(Package parentPackage) {
                this.parentPackage = parentPackage;
            }
        }

    So there is one Package, which has one "top" Content. The top Content links back to the Package, with cascade set to ALL. The top Content may have many "sub" Contents, and each sub-Content may have many sub-Contents of its own. Each sub-Content has a parent Package, which may or may not be the same Package as the top Content (i.e. a many-to-one relationship for Content to Package). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents), but for the case I am currently testing each sub-Content only relates to one Package or Content.

    The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from table subcontents. I've tried specifically (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?

    EDIT: After reading answers/comments I realised that a Content cannot have multiple Packages, and a sub-Content cannot have multiple parent Contents, so I have modified the annotations from ManyToOne and ManyToMany to OneToOne and OneToMany. Unfortunately that did not fix the problem. I have also added the bi-directional link from Content back to the parent Package which I left out of the simplified code.

    Read the article
