Search Results

Search found 7595 results on 304 pages for 'functionality'.


  • Linux IPTables / routing issue

    - by Jon
    Hi all, EDIT 1/3/10 22:00 GMT - rewrote some of it after further investigation It has been a while since I looked at IPtables and I seem to be worse than before as I can not seem to get my webserver online. Below is my firewall rules on the gateway server that is running the dhcp server accessing the net. The webserver is inside my network on a static IP (192.168.0.98, default port). When I use Nmap or GRC.com I see that port 80 is open on the gateway server but when I browse to it, (via public URL. http://www.houseofhawkins.com) it always fails with a connection error, (nmap cannot connect and figure out what the web server is either). I can nmap the webserver and browse to it just fine via same IP inside my network. I believe it is my IPTable rules that are not letting it through. Internally I can route all my requests. Each machine can browse to the website and traffic works just fine. I can MSTSC / ssh to all the webservers internally and they inturn can connect to the web. IPTABLE: *EDIT - Added new firewall rules 2/3/10 * #!/bin/sh iptables="/sbin/iptables" modprobe="/sbin/modprobe" depmod="/sbin/depmod" EXTIF="eth2" INTIF="eth1" load () { $depmod -a $modprobe ip_tables $modprobe ip_conntrack $modprobe ip_conntrack_ftp $modprobe ip_conntrack_irc $modprobe iptable_nat $modprobe ip_nat_ftp echo "enable forwarding.." echo "1" > /proc/sys/net/ipv4/ip_forward echo "enable dynamic addr" echo "1" > /proc/sys/net/ipv4/ip_dynaddr # start firewall # default policies $iptables -P INPUT DROP $iptables -F INPUT $iptables -P OUTPUT DROP $iptables -F OUTPUT $iptables -P FORWARD DROP $iptables -F FORWARD $iptables -t nat -F #echo " Opening loopback interface for socket based services." $iptables -A INPUT -i lo -j ACCEPT $iptables -A OUTPUT -o lo -j ACCEPT #echo " Allow all connections OUT and only existing and related ones IN" $iptables -A INPUT -i $INTIF -j ACCEPT $iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A OUTPUT -o $EXTIF -j ACCEPT $iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A FORWARD -i $EXTIF -o $INTIF -m state --state ESTABLISHED,RELATED -j ACCEPT $iptables -A FORWARD -i $INTIF -o $EXTIF -j ACCEPT $iptables -A FORWARD -j LOG --log-level 7 --log-prefix "Dropped by firewall: " $iptables -A INPUT -j LOG --log-level 7 --log-prefix "Dropped by firewall: " $iptables -A OUTPUT -j LOG --log-level 7 --log-prefix "Dropped by firewall: " #echo " Enabling SNAT (MASQUERADE) functionality on $EXTIF" $iptables -t nat -A POSTROUTING -o $EXTIF -j MASQUERADE $iptables -A INPUT -i $INTIF -j ACCEPT $iptables -A OUTPUT -o $INTIF -j ACCEPT #echo " Allowing packets with ICMP data (i.e. ping)." $iptables -A INPUT -p icmp -j ACCEPT $iptables -A OUTPUT -p icmp -j ACCEPT $iptables -A INPUT -p udp -i $INTIF --dport 67 -m state --state NEW -j ACCEPT #echo " Port 137 is for NetBIOS." $iptables -A INPUT -i $INTIF -p udp --dport 137 -j ACCEPT $iptables -A OUTPUT -o $INTIF -p udp --dport 137 -j ACCEPT #echo " Opening port 53 for DNS queries." $iptables -A INPUT -p udp -i $EXTIF --sport 53 -j ACCEPT #echo " opening Apache webserver" $iptables -A PREROUTING -t nat -i $EXTIF -p tcp --dport 80 -j DNAT --to 192.168.0.96:80 $iptables -A FORWARD -p tcp -m state --state NEW -d 192.168.0.96 --dport 80 -j ACCEPT } flush () { echo "flushing rules..." $iptables -P FORWARD ACCEPT $iptables -F INPUT $iptables -P INPUT ACCEPT echo "rules flushed" } case "$1" in start|restart) flush load ;; stop) flush ;; *) echo "usage: start|stop|restart." 
;; esac exit 0 route info: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 5e0412a6.bb.sky * 255.255.255.255 UH 0 0 0 eth2 192.168.0.0 * 255.255.255.0 U 0 0 0 eth1 default 5e0412a6.bb.sky 0.0.0.0 UG 100 0 0 eth2 ifconfig: eth1 Link encap:Ethernet HWaddr 00:22:b0:cf:4a:1c inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::222:b0ff:fecf:4a1c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:79023 errors:0 dropped:0 overruns:0 frame:0 TX packets:57786 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:11580918 (11.5 MB) TX bytes:22872030 (22.8 MB) Interrupt:17 Base address:0x2b00 eth2 Link encap:Ethernet HWaddr 00:0c:f1:7c:45:5b inet addr:94.4.18.166 Bcast:94.4.18.166 Mask:255.255.255.255 inet6 addr: fe80::20c:f1ff:fe7c:455b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:57038 errors:0 dropped:0 overruns:0 frame:0 TX packets:34532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:21631721 (21.6 MB) TX bytes:7685444 (7.6 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1517 (1.5 KB) TX bytes:1517 (1.5 KB) EDIT OK so as requested I will try and expand on my infrastructure: I previously had it setup with a Sky broadband modem router that did the DHCP and I used its web interface to port forward the web across to the web server. The network looked something like this: I have now replaced the sky modem with a dlink modem which gives the IP to the gateway server that now does the DHCP. It looks like: The internet connection is a standard broadband connection with a dynamic IP, (use zoneedit.com to keep it updated). I have tried it on each of the webservers(one Ubuntu Apache server and one WS2008 IIS7). I think there must also be an issue with my IPTable rules as it can route to my win7 box which has the default IIS7 page and that would not display when I forwarded all port 80 to it. I would be really grateful for any and all help with this. Thanks Jon
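
    A minimal sketch of the two rules this setup hinges on, written against 192.168.0.98 (the address the question gives for the web server; note that the posted script DNATs to 192.168.0.96 instead, which is worth double-checking). The interface name is taken from the script above and is an assumption if the setup differs.

        #!/bin/sh
        # Hypothetical excerpt: forward inbound port 80 on the external
        # interface to the internal web server.
        EXTIF="eth2"
        WEBSERVER="192.168.0.98"   # the question says .98; the posted rules use .96

        iptables -t nat -A PREROUTING -i "$EXTIF" -p tcp --dport 80 \
            -j DNAT --to-destination "$WEBSERVER":80
        iptables -A FORWARD -i "$EXTIF" -p tcp -d "$WEBSERVER" --dport 80 \
            -m state --state NEW -j ACCEPT

    If the public URL still fails with rules like these in place, watching the external interface while testing from outside (for example with tcpdump -i eth2 port 80) shows whether the requests reach the gateway at all before blaming the NAT rules.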


  • SSH service will not start on fresh Cygwin 1.7.15 install

    - by Coder6841
    OS: Windows 7 x64 Cygwin: 1.7.15-1 OpenSSH: 6.0p1-1 I'm attempting to install an SSH server on Windows 7. The tutorial that I'm following to do this is here: http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/ The issue is that upon executing the net start sshd command I get the following output:The CYGWIN sshd service is starting. The CYGWIN sshd service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. Here is the full output of the setup: AdminUser@ThisComputer ~ $ ssh-host-config *** Info: Generating /etc/ssh_host_key *** Info: Generating /etc/ssh_host_rsa_key *** Info: Generating /etc/ssh_host_dsa_key *** Info: Generating /etc/ssh_host_ecdsa_key *** Info: Creating default /etc/ssh_config file *** Info: Creating default /etc/sshd_config file *** Info: Privilege separation is set to yes by default since OpenSSH 3.3. *** Info: However, this requires a non-privileged account called 'sshd'. *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep. *** Query: Should privilege separation be used? (yes/no) yes *** Info: Note that creating a new user requires that the current account have *** Info: Administrator privileges. Should this script attempt to create a *** Query: new local account 'sshd'? (yes/no) yes *** Info: Updating /etc/sshd_config file *** Query: Do you want to install sshd as a service? *** Query: (Say "no" if it is already installed as a service) (yes/no) yes *** Query: Enter the value of CYGWIN for the daemon: [] *** Info: On Windows Server 2003, Windows Vista, and above, the *** Info: SYSTEM account cannot setuid to other users -- a capability *** Info: sshd requires. You need to have or to create a privileged *** Info: account. This script will help you do so. *** Info: You appear to be running Windows XP 64bit, Windows 2003 Server, *** Info: or later. On these systems, it's not possible to use the LocalSystem *** Info: account for services that can change the user id without an *** Info: explicit password (such as passwordless logins [e.g. public key *** Info: authentication] via sshd). *** Info: If you want to enable that functionality, it's required to create *** Info: a new account with special privileges (unless a similar account *** Info: already exists). This account is then used to run these special *** Info: servers. *** Info: Note that creating a new user requires that the current account *** Info: have Administrator privileges itself. *** Info: No privileged account could be found. *** Info: This script plans to use 'cyg_server'. *** Info: 'cyg_server' will only be used by registered services. *** Query: Do you want to use a different name? (yes/no) no *** Query: Create new privileged user account 'cyg_server'? (yes/no) yes *** Info: Please enter a password for new user cyg_server. Please be sure *** Info: that this password matches the password rules given on your system. *** Info: Entering no password will exit the configuration. *** Query: Please enter the password: *** Query: Reenter: *** Info: User 'cyg_server' has been created with password '[CENSORED]'. *** Info: If you change the password, please remember also to change the *** Info: password for the installed services which use (or will soon use) *** Info: the 'cyg_server' account. *** Info: Also keep in mind that the user 'cyg_server' needs read permissions *** Info: on all users' relevant files for the services running as 'cyg_server'. 
*** Info: In particular, for the sshd server all users' .ssh/authorized_keys *** Info: files must have appropriate permissions to allow public key *** Info: authentication. (Re-)running ssh-user-config for each user will set *** Info: these permissions correctly. [Similar restrictions apply, for *** Info: instance, for .rhosts files if the rshd server is running, etc]. *** Info: The sshd service has been installed under the 'cyg_server' *** Info: account. To start the service now, call `net start sshd' or *** Info: `cygrunsrv -S sshd'. Otherwise, it will start automatically *** Info: after the next reboot. *** Info: Host configuration finished. Have fun! AdminUser@ThisComputer ~ $ net start sshd The CYGWIN sshd service is starting. The CYGWIN sshd service could not be started. The service did not report an error. More help is available by typing NET HELPMSG 3534. Note that on the line *** Query: Enter the value of CYGWIN for the daemon: [] I haven't entered anything. Tutorials often say to use ntsec or ntsec tty here but those options are removed from the latest version of OpenSSH. I've tried using them anyway and the result is the same. The file /var/log/sshd.log is empty. If I try just running the command /usr/sbin/sshd I get the output /var/empty must be owned by root and not group or world-writable.. The /var/empty directory has the following permissions: drwxr-xr-x+ 1 cyg_server root 0 May 29 15:28 empty. Google searches on this error did not turn up any working fixes. One person seems to have solved it by using the command chown SYSTEM /var/empty but that did not fix it in my case.
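
    A hedged sketch of checks that are commonly suggested for this particular failure (service starts silently and dies, /var/empty ownership complaint). The account and path names are the ones from the transcript above; the exact ownership shown is an assumption, not a verified fix.

        # Run sshd in the foreground with debugging instead of as a service,
        # so the real error is printed to the terminal:
        /usr/sbin/sshd -D -d -e

        # Inspect and tighten the privilege-separation directory:
        ls -ld /var/empty
        chown cyg_server /var/empty
        chmod 755 /var/empty

        # Ask the Cygwin service wrapper what it recorded for the service:
        cygrunsrv -Q sshd

    The Windows Application event log is also worth a look; service start failures sometimes leave a more specific message there than the generic NET HELPMSG 3534 text.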


  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
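
    When the FastCGI child dies without leaving anything useful in the lighttpd log, it often helps to take lighttpd out of the loop and run the failing script through the same php-cgi binary, and to give PHP an error log of its own. A sketch using the binary path from the fastcgi config above; the script path and log location are placeholders.

        # Execute the script directly; parse errors and fatal errors print here.
        sudo -u www-data /usr/bin/php-cgi /path/to/customer-facing-portal/index.php

        # Give the CGI SAPI its own error log (settings go in /etc/php5/cgi/php.ini):
        #   log_errors = On
        #   error_log  = /var/log/php_errors.log
        sudo touch /var/log/php_errors.log
        sudo chown www-data /var/log/php_errors.log

        # Reproduce the login while watching both logs:
        tail -f /var/log/php_errors.log /var/log/lighttpd/error.log

    If the direct run works but the FastCGI request still dies, checking dmesg for a segfaulting php-cgi process is a reasonable next step.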


  • NetApp erroring with: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT

    - by Sobrique
    Since a sitewide upgrade to Windows 7 on desktop, I've started having a problem with virus checking. Specifically - when doing a rename operation on a (filer hosted) CIFS share. The virus checker seems to be triggering a set of messages on the filer: [filerB: auth.trace.authenticateUser.loginTraceIP:info]: AUTH: Login attempt by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20 (server-wk8-r2). [filerB: auth.dc.trace.DCConnection.statusMsg:info]: AUTH: TraceDC- attempting authentication with domain controller \\MYDC. [filerB: auth.trace.authenticateUser.loginRejected:info]: AUTH: Login attempt by user rejected by the domain controller with error 0xc0000199: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT. [filerB: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: Delaying the response by 5 seconds due to continuous failed login attempts by user server-wk8-r2$ of domain MYDOMAIN from client machine 10.1.1.20. This seems to specifically trigger on a rename so what we think is going on is the virus checker is seeing a 'new' file, and trying to do an on-access scan. The virus checker - previously running as LocalSystem and thus sending null as it's authentication request is now looking rather like a DOS attack, and causing the filer to temporarily black list. This 5s lock out each 'access attempt' is a minor nuisance most of the time, and really quite significant for some operations - e.g. large file transfers, where every file takes 5s Having done some digging, this seems to be related to NLTM authentication: Symptoms Error message: System error 1808 has occurred. The account used is a computer account. Use your global user account or local user account to access this server. A packet trace of the failure will show the error as: STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT (0xC0000199) Cause Microsoft has changed the functionality of how a Local System account identifies itself during NTLM authentication. This only impacts NTLM authentication. It does not impact Kerberos Authentication. Solution On the host, please set the following group policy entry and reboot the host. Network Security: Allow Local System to use computer identity for NTLM: Disabled Defining this group policy makes Windows Server 2008 R2 and Windows 7 function like Windows Server 2008 SP1. So we've now got a couple of workaround which aren't particularly nice - one is to change this security option. One is to disable virus checking, or otherwise exempt part of the infrastructure. And here's where I come to my request for assistance from ServerFault - what is the best way forwards? I lack Windows experience to be sure of what I'm seeing. I'm not entirely sure why NTLM is part of this picture in the first place - I thought we were using Kerberos authentication. I'm not sure how to start diagnosing or troubleshooting this. (We are going cross domain - workstation machine accounts are in a separate AD and DNS domain to my filer. Normal user authentication works fine however.) And failing that, can anyone suggest other lines of enquiry? I'd like to avoid a site wide security option change, or if I do go that way I'll need to be able to supply detailed reasoning. Likewise - disabling virus checking works as a short term workaround, and applying exclusions may help... but I'd rather not, and don't think that solves the underlying problem. EDIT: Filers in AD ldap have SPNs for: nfs/host.fully.qualified.domain nfs/host HOST/host.fully.qualified.domain HOST/host (Sorry, have to obfuscate those). 
Could it be that without a 'cifs/host.fully.qualified.domain' it's not going to work? (or some other SPN? ) Edit: As part of the searching I've been doing I've found: http://itwanderer.wordpress.com/2011/04/14/tread-lightly-kerberos-encryption-types/ Which suggests that several encryption types were disabled by default in Win7/2008R2. This might be pertinent, as we've definitely had a similar problem with Keberized NFSv4. There is a hidden option which may help some future Keberos users: options nfs.rpcsec.trace on (This hasn't given me anything yet though, so may just be NFS specific). Edit: Further digging has me tracking it back to cross domain authentication. It looks like my Windows 7 workstation (in one domain) is not getting Kerberos tickets for the other domain, in which my NetApp filer is CIFS joined. I've done this separately against a standalone server (Win2003 and Win2008) and didn't get Kerberos tickets for those either. Which means I think Kerberos might be broken, but I've no idea how to troubleshoot further. Edit: A further update: It looks like this may be down Kerberos tickets not being issued cross domain. This then triggers NTLM fallback, which then runs into this problem (since Windows 7). First port of call will be to investigate the Kerberos side of things, but in neither case do we have anything pointing at the Filer being the root cause. As such - as the storage engineer - it's out of my hands. However, if anyone can point me in the direction of troubleshooting Kerberos spanning two Windows AD domains (Kerberos Realms) then that would be appreciated. Options we're going to be considering for resolution: Amend policy option on all workstations via GPO (as above). Talking to AV vendor about the rename triggering scanning. Talking to AV vendor regarding running AV as service account. investigating Kerberos authentication (why it's not working, whether it should be).
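
    A sketch of the client-side Kerberos checks implied by the last edit, run from the Windows 7 workstation in a command prompt. The filer's FQDN is a placeholder, and setspn assumes the AD admin tools are available on the machine it is run from.

        REM Clear cached tickets, touch the share, then see what was issued:
        klist purge
        dir \\filer.other.domain.example\share
        klist

        REM If no cifs/ or HOST/ ticket for the filer shows up, ask AD which
        REM account (if any) holds a cifs/ SPN for it:
        setspn -Q cifs/filer.other.domain.example

    If klist shows no ticket for the filer at all after the dir, that supports the "Kerberos fails cross-domain and NTLM is the fallback" reading above, and the trust/SPN side is the place to dig rather than the filer itself.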


  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on a ARM-Based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the Root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since there is no hot plugging functionality needed and booting time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). because i still need to change parameters of the devices (with ioctl /sysfs) this does not work for me in this case. so i thought of mounting a tmpfs on /dev and change the parameters there. is this possible? and how to do best? my approach would be: delete /dev from RFS create tar containing basic devices mount tmpfs /dev untar tar-file into /dev change parameters Could this work? Do you see any problems? I found out, that you can mount on top of already mounted mount point, is it somehow possible just to take data with while mounting the new file system? if so that would be very convenient! Thanks Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed it into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT. #mount /dev on tmpfs echo -n "Mounting /dev on tmpfs..." mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev mknod -m 600 /dev/console c 5 1 mknod -m 600 /dev/null c 1 3 echo "done." echo -n "Populating /dev..." tar -xf /usr/devices.tar -C /dev echo "done." This works fine on the version over NFS, if I place printf's in the code, I can see it executing, if I comment out the extracting part, its complaining about missing devices. Booting OK mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. System Clock set to: Thu Sep 13 11:26:23 UTC 2012. INIT: Entering runlevel: 2 UBI: attaching mtd8 to ubi0 Commenting out the extraction of the tar mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. Cannot access the Hardware Clock via any known method. Use the --debug option to see the details of our search for an access method. Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning). INIT: Entering runlevel: 2 libubi: error!: cannot open "/dev/ubi_ctrl" So far so good. But if I pack the whole story into a squashfs and boot from there, it is acting strange. It's telling me while booting that it is unable to open an initial console and its throwing errors on mounting the UBIFS devices, but finally provides a login anyway. Over that my echo's are not executed. If I then log in, /dev is mounted as TMPFS as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions it is executed whitout problem and useable. 
From squashfs:
VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
Freeing init memory: 136K
Warning: unable to open an initial console.
mmc0: new high speed SDHC card at address 0007
mmcblk0: mmc0:0007 SD04G 3.67 GiB
mmcblk0: p1
UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19
Additionally, part of the remaining boot scripts is still executed, but not all of them. Does anyone have a clue why? A second question: is 5 MB enough, or too much, for /dev?
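
    On the "take the data with you" question: one pattern that is sometimes used is to populate the tmpfs first and only then move it over /dev, so the console and the existing nodes never disappear mid-boot. A sketch under the assumption that the mount binary on this Debian-based image supports --move (MS_MOVE) and that a staging directory already exists in the read-only squashfs image.

        # Build the new /dev off to the side (/mnt/newdev is assumed to
        # exist in the image, since the root filesystem is read-only):
        mount -t tmpfs -o size=5M,mode=0755 tmpfs /mnt/newdev

        # Copy whatever the current /dev contains, device nodes included,
        # or unpack devices.tar here instead:
        cp -a /dev/. /mnt/newdev/
        # tar -xf /usr/devices.tar -C /mnt/newdev

        # Then swap it into place in one step:
        mount --move /mnt/newdev /dev

    On sizing: device nodes occupy inodes rather than data blocks, so a tmpfs /dev normally uses only a few kilobytes; 5M mostly just acts as an upper bound.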


  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at at least 2Mbyte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no Throttling and no special filters peeking into HTTP traffic, just the plain Firewall functionality for blocking/allowing connections. The Apache error.log (Loglevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (an extension to Firefox). The Output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the freedom to edit the URL, removing folders and changing the actual servers domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
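
    One way to take the browsers and their caches out of the comparison is to time a single large transfer over both protocols with curl (available under the same Cygwin already used for wget above). A sketch using the host and file from the captured headers:

        # Average download speed in bytes/second, HTTP vs. HTTPS:
        curl -o /dev/null -w "HTTP   %{speed_download} bytes/s\n"  http://www.mydomain.com/elephantsdream_source.264
        curl -k -o /dev/null -w "HTTPS  %{speed_download} bytes/s\n" https://www.mydomain.com/elephantsdream_source.264

    If curl reproduces the same 550 kB/s vs. 2.7 MB/s split, the clients are ruled out and attention can return to how plain-HTTP responses leave Apache on this box, i.e. the sendfile/AcceptEx code path that the EnableSendfile and Win32DisableAcceptEx experiments above were aimed at.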


  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to setup BugZilla to receive but reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this I install test-taint package from the default perl repository, this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I run the checksetup.pl which inform me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC is relent on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry the script times out. How can I get the XML-RPC / Test-Taint function working ? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. 
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
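
    A quick way to see whether Test::Taint itself is broken, independent of Bugzilla, is to load it under the same ActivePerl and exercise taint detection directly; the warning about Taint.pm line 334 should reproduce here if the module is at fault. The commands below are a sketch, not a guaranteed fix.

        perl -MTest::Taint -e "print qq{Test::Taint $Test::Taint::VERSION loaded\n}"
        perl -T -MTest::Taint -e "print Test::Taint::taint_checking() ? qq{taint detection works\n} : qq{taint detection BROKEN\n}"

        REM If either command misbehaves, forcing a rebuild from CPAN is one option:
        cpan -f -i Test::Taint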


  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest. 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors. ^^^ If I can remedy the above messages I think I'll be good to go ^^^ * EDIT 2 * Grasping at straws I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically and what I have seen before... dunno why but that did the trick. Also re-checked perms on the dir the files live in. As certain as I was, they were correct with named having rw. CentOS 6 (final) dhcpd 4.1.1-P1 named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic - no matter which dir with named owning and wide open perms I now get nowhere). Along the way I, at one point, got a permission denied when named tried to create the journal. Resolved the issue by: chown --recursive named:named /var/named chmod --recursive 777 /var/named The journal was then created and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed and having restarted named it threw an error indicating the journal was out of sync (or something to that affect)... didn't matter since this is a new setup so I deleted it and now it is not recreated. Now though I see no errors in /var/log/messages, my chrooted /var/log/named.log, or chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled... [root@server temp]# sestatus SELinux status: disabled dhcpd.conf... allow client-updates; ddns-update-style interim; subnet 192.168.111.0 netmask 255.255.255.224 { ... key dhcpudpate { algorithm hmac-md5; secret LDJMdPdEZED+/nN/AGO9ZA==; } zone example.lan. { primary 192.168.111.2; key dhcpudpate; } } named.conf... key dhcpudpate { algorithm hmac-md5; secret "LDJMdPdEZED+/nN/AGO9ZA=="; }; zone "example.lan" { type master; file "/var/named/dynamic/example.lan.db"; allow-transfer { none; }; allow-update { key dhcpudpate; }; notify false; check-names ignore; }; The following shows /var/log/named.log output of named starting up - no errors. 
27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0 27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0 27-Jul-2012 21:33:39.353 general: notice: running 27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10' 27-Jul-2012 21:34:03.825 general: info: debug level is now 10 ...and /var/log/messages for a named start... Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium, Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576 Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads Jul 27 23:02:04 server named[9124]: using up to 4096 sockets Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf' Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53 Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys' Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys' Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953 What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot here and, if so, how? Many thanks.
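
    One way to separate dhcpd from named when debugging this is to push a dynamic update by hand with nsupdate, using the TSIG key from the configs above; if this succeeds, named creates the journal on its own and the remaining suspect is the dhcpd side. The test record name is illustrative.

        printf 'server 192.168.111.2\nzone example.lan.\nupdate add ddnstest.example.lan. 300 A 192.168.111.200\nsend\n' \
            | nsupdate -y hmac-md5:dhcpudpate:LDJMdPdEZED+/nN/AGO9ZA==

        # The journal should now exist inside the chroot:
        ls -l /var/named/chroot/var/named/dynamic/example.lan.db.jnl

    (The key name "dhcpudpate" is spelled exactly as it appears in dhcpd.conf and named.conf above; the names have to match byte for byte or the update is refused.)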


  • Moving a Drupal site between Linux servers: best practices to avoid file-ownership problems

    - by zero
    I want to port over a Drupal commons 6x24 from a local LAMP-stack to a production webserver. Both systems run OpenSuse Linux. How do I do this, what are the most important steps. How should I handle file-ownership. It's important for me to have to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver-admin. See for example the long history of looking for fixes and solutions see this thread and even more interesting see this very long and impressive thread here. All troubles I run into have to do with file-owernship and permissions. This is my current setup; Note: This was just a quick hacked installation - quick and dirty. Well my interest is after the general options i have in the port of a drupal from linux to linux linux-vi17:/srv/www/htdocs/com624 # ls -l insgesamt 224 -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php linux-vi17:/srv/www/htdocs/com624 # thx to BetaRides answer here a quick overview on the drush functionality with rsync http://drush.ws/ core-rsync Rsync the Drupal tree to/from another server using ssh. Examples: drush rsync @dev @stage Rsync Drupal root from dev to stage (one of which must be local). drush rsync ./ @stage:%files/img Rsync all files in the current directory to the 'img' directory in the file storage folder on stage. Arguments: source May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. destination May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. Options: --mode The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az. --RSYNC-FLAG Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation. --exclude-conf Excludes settings.php from being rsynced. Default. --include-conf Allow settings.php to be rsynced --exclude-files Exclude the files directory. --exclude-sites Exclude all directories in "sites/" except for "sites/all". --exclude-other-sites Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site" --exclude-paths List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows). --include-paths List of paths to include, seperated by : (Unix-based systems) or ; (Windows). 
Topics:
 docs-aliases   Site aliases overview with examples
Aliases: rsync
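
    For the plain-rsync route (outside drush), a sketch of copying the tree and then normalising ownership and permissions on the target, using the docroot from the listing above. "deployuser" and the remote host are placeholders, while wwwrun/www are the stock OpenSuse Apache user and group already mentioned in the question.

        # Copy the tree from the local LAMP box to production:
        rsync -az --delete /srv/www/htdocs/com624/ deployuser@production:/srv/www/htdocs/com624/

        # On production: code owned by a non-web account, readable by the
        # web server's group, and only the files directory writable by it.
        chown -R deployuser:www /srv/www/htdocs/com624
        find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
        find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
        chown -R wwwrun:www /srv/www/htdocs/com624/sites/default/files
        chmod -R 775 /srv/www/htdocs/com624/sites/default/files

    The current listing (everything root:www and mode 777) is exactly the kind of layout a strict webserver admin will object to, so tightening it as part of the move avoids the ownership fights described in the linked threads.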


  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant of these records (where XXX.XXX.XXX.158 is our dedicated IP): bigjaws.com. A XXX.XXX.XXX.158 mail.bigjaws.com. A XXX.XXX.XXX.158 bigjaws.com MX (10) mail.bigjaws.com. bigjaws.com. TXT v=spf1 +a +mx -all The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to setup anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records: mail.schoox.com. A XXX.XXX.XXX.158 bigjaws.com. MX (10) mail.bigjaws.com bigjaws.com. SPF "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all" From my understanding of SPF, the SPF status should have been passed: I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 are valid/not spam (I added +mx there but I'm not sure if needed). When a mail server receives an email, doesn't it lookup the SPF record of the domain and checks against the IP it got the email from? Checking with spfquery: root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected] StartError Context: Failed to query MAIL-FROM ErrorCode: (2) Could not find a valid SPF record Error: No DNS data for 'bigjaws.com'. EndError noneneutral Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected]; If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!): spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster. Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk). 
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
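
    One thing worth verifying from the shell: Route 53 lets you create a record of the dedicated "SPF" type, but spfquery and most receivers only look at TXT records, so the policy needs to exist as TXT as well. A sketch using the names from the question:

        # What the zone actually publishes, per record type:
        dig +short TXT bigjaws.com
        dig +short SPF bigjaws.com

        # Query one of the zone's own name servers to rule out caching:
        dig +short TXT bigjaws.com @"$(dig +short NS bigjaws.com | head -n1)"

    If the TXT query comes back empty while the SPF one returns the policy, adding the same "v=spf1 ip4:XXX.XXX.XXX.158 mx ~all" string as a TXT record in Route 53 is the usual remedy, and it would also explain the "No DNS data for 'bigjaws.com'" message from spfquery above.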


  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server?
    A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client info and personal info such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox, Amazon AWS, or similar. I couldn't justify that cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns.
    I could copy all my files to my laptop because it has the space, but then I have to sync between my home computer and my laptop, and I've found in the past that I'm not very good about doing this. And if my laptop is lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing a laptop as higher; I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable.
    So I want to know the best (reasonable) way to securely access my data (from my laptop) while on the road. I only need to access it from this one computer, although I may connect via my phone's 3G/4G, WiFi, or some client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar), which would provide about all the functionality I anticipate needing: I would like to use SFTP and Dolphin to browse and download files, and SSH and the terminal for anything else. My Linux file server is set up with OpenSSH, and I think I have SSH relatively secured; I'm using DenyHosts too. But I want to go several steps further: I want to get the chance that anyone else can get into my server as close to zero as possible while still allowing me to get access from the road. I'm not a sysadmin or programmer or real "superuser", and I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as "Top 20 OpenSSH Server Best Security Practices", "20 Linux Server Hardening Security Tips", "Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software", and more. I have not implemented every single thing I've read about, and I probably can't do that.
    But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get suggestions here that are within my capability to implement and that leverage these facts to provide much better security than the general-purpose advice in the articles above.
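    A minimal sketch of what that could look like in practice (the user name and port below are placeholders, not recommendations): disable password logins entirely and whitelist the single account, so there is nothing for an online attacker to brute-force.

        # /etc/ssh/sshd_config -- sketch only, adapt names and port to your setup
        Port 22422                   # non-standard port only cuts log noise, not real security
        PermitRootLogin no
        PasswordAuthentication no    # key-only logins
        PubkeyAuthentication yes
        AllowUsers acepaus           # placeholder: the one account allowed to log in
        MaxAuthTries 3
        LoginGraceTime 30
        X11Forwarding no

    Generate the key pair on the laptop with ssh-keygen, copy the public half over with ssh-copy-id, keep the private key passphrase-protected, and restart sshd. Once password authentication is off, DenyHosts, fail2ban and port knocking only add marginal value, because there is no password left to guess.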

    Read the article

  • jqgrid setting custom formatter to dynamic column collection

    - by user312249
    I am using jqgrid. We are building a dashboard functionality with jquery. Different application just have to register respective application page and dashboard will render that page.To achieve this we are using jqgrid as one of the jquery plugin. Following is my codeenter code here var ph = '#' + placeHolder; var _prevSort; $.ajax({ url: dataUrl, dataType: "json", async: true, success: function(json) { pager = $('#' + pager); if (json.showPager === "false") { pager = eval(json.showPager); } dataUrl += "&jqSession=true"; $(ph).jqGrid({ url: dataUrl, datatype: "json", sortclass: "grid_sort", colNames: JSON.parse(json.colNames), colModel: JSON.parse(json.colModel), forceFit: true, rowNum: json.rowNum, rowList: JSON.parse(json.rowList), pager: pager, sortname: json.sortName, caption: json.caption, viewrecords: true, viewsortcols: true, sortorder: json.sortOrder, footerrow: summaryFooter, userDataOnFooter: summaryFooter, jsonReader: { root: "rows", row: "row", repeatitems: false, id: json.sortName }, gridComplete: function() { if (showFooter) { $(ph).append("" + json.footerRow + ""); } if (json.additionalContent != null) { $("#" + xContID).html(json.additionalContent); } $("ui-icon-asc").append("IMG"); var _rows = $(".jqgrow"); if (json.rows.length 0) { for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } var gMaxHeight = getGridMaxHeight(); var gHeight = ($(ph + " tr").length + 1) * ($($(".jqgrow") [0]).height()); if (gHeight <= gMaxHeight) { $(ph).parent().height(gHeight); } else { $(ph).parent().height(gMaxHeight); } } else { $(ph).prepend("" + gridNoDataMsg + ""); $(ph).parent().height(60); } }, onSortCol: function(index, iCol, sortorder) { dataUrl = dataUrl.replace("&jqSession=true", ""); $(ph).jqGrid().setGridParam({ url: dataUrl }).trigger("reloadGrid"); var colName = "#jqgh" + index; // $(_prevSort).parent().removeClass("ui-jqgrid-sorted"); // $(_prevSort).parent().addClass("ui-state-default"); // $(_colName).parent().addClass("ui-jqgrid-sorted"); // $(_colName).parent().removeClass("ui-state-default"); _prevSort = _colName; var _rows = $(".jqgrow"); for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } } }).navGrid('#' + pager, { search: false, sort: false, edit: false, add: false, del: false, refresh: false }); // end of grid $("#" + loadid).empty(); gGridIds[gGridIds.length] = placeHolder; SetGridSizes(); }, error: function() { $("#" + loadid).html(loadingErr); } }); As you can see from the code i am getting column collection dynamically(Appication page which i am calling will give me JSON in the response and will have colNames collection in it. Evrything is working fine but, only issue is coming when we are trying to apply custom formatter to column. This issue comes only when we are dynamically assign "colModel" to jqgrid. Appreciate help Thanks in advance
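    One likely cause (an assumption, since the server response is not shown): JSON.parse cannot transport functions, so a custom formatter delivered inside the dynamic colModel arrives as a plain string, and only jqGrid's built-in formatter names work that way. A common workaround is to send just the formatter's name from the server and map it to a real client-side function before the model is handed to jqGrid — a sketch:

        // hypothetical registry of client-side formatters, keyed by the name the server sends
        var myFormatters = {
            currency: function (cellvalue, options, rowObject) {
                return "$" + parseFloat(cellvalue).toFixed(2);
            },
            yesNo: function (cellvalue, options, rowObject) {
                return cellvalue ? "Yes" : "No";
            }
        };

        var colModel = JSON.parse(json.colModel);
        for (var i = 0; i < colModel.length; i += 1) {
            var f = colModel[i].formatter;
            // replace the string name with the actual function, if we know it
            if (typeof f === "string" && myFormatters[f]) {
                colModel[i].formatter = myFormatters[f];
            }
        }
        // ...then pass colModel to $(ph).jqGrid({...}) instead of JSON.parse(json.colModel)

    The same trick works for unformat, custom sorters, or any other colModel member that has to be a function.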

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory. I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours. Previous version (everything works just fine): none of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFound): a resource "META-INF/services/javax.xml.bind.JAXBContext" is loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions. What sort of resource is that? Inside our EAR there are two WARs, and neither of them contains a services folder in its META-INF directory. And where else could that value come from? A file diff showed me no new or changed properties files. Needless to say I am going to read all about the JAXB configuration possibilities, but if you have first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks!
    EDIT (in response to comments/questions): Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1 jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file showing the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory. I think this has nothing to do with the current issue, since the jar stayed exactly the same in both EARs (the one that runs and the one with the exception). It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation. There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If none of those places indicates which context factory to use, a default factory is loaded, which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory";
    Now back to my problem. Building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine; while debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TEST CASE fails but the rest of the functionality works (I can send requests to our web service and they don't trigger any errors); while debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
    Here are the most important lines showing how this resource leads to the attempt to load com.ibm.xml.xlxp2.jaxb.JAXBContextFactory, which then throws the ClassNotFoundException. This is simplified source of the mentioned javax.xml.bind.ContextFinder class:

        URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext");
        BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8"));
        String factoryClassName = r.readLine().trim();

    The field factoryClassName now has the value com.ibm.xml.xlxp2.jaxb.JAXBContextFactory. Because this has become a super large question I will also add a bounty. :)
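    That resource is a standard service-provider file: any jar visible to the classloader may carry META-INF/services/javax.xml.bind.JAXBContext, and since com.ibm.xml.xlxp2 is IBM's JAXB implementation, the entry is presumably contributed by the WebSphere/IBM runtime rather than by your EAR — which is why no file diff of your own artifacts shows it. If the goal is simply to pin the factory the application expects, one hedged option (the exact system property name consulted depends on the JAXB version, so this sketch sets both spellings; SomeDomainClass is a placeholder for one of your own JAXB-annotated classes) is to set it explicitly before the first JAXBContext.newInstance call:

        // sketch -- run once at startup, before any JAXBContext is created
        System.setProperty("javax.xml.bind.context.factory",
                "org.eclipse.persistence.jaxb.JAXBContextFactory");
        System.setProperty("javax.xml.bind.JAXBContext",
                "org.eclipse.persistence.jaxb.JAXBContextFactory");
        JAXBContext ctx = JAXBContext.newInstance(SomeDomainClass.class);

    Alternatively, a jaxb.properties file containing the javax.xml.bind.context.factory entry, placed in the same package as the classes you pass to newInstance(), is consulted before the services resource, which keeps the choice out of global state.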

    Read the article

  • using Silverlight 3's HtmlPage.Window.Navigate method to reuse an already open browser window

    - by Phil
    Hi, I want to use an external browser window to implement a preview functionality in a silverlight application. There is a list of items and whenever the user clicks one of these items, it's opened in a separate browser window (the content is a pdf document, which is why it is handled ouside of the SL app). Now, to achieve this, I simply use HtmlPage.Window.Navigate(new Uri("http://www.bing.com")); which works fine. Now my client doesn't like the fact that every click opens up a new browser window. He would like to see the browser window reused every time an item is clicked. So I went out and tried implementing this: Option 1 - Use the overload of the Navigate method, like so: HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "foo"); I was assuming that the window would be reused when the same target parameter value (foo) would be used in subsequent calls. This does not work. I get a new window every time. Option 2 - Use the PopupWindow method on the HtmlPage HtmlPage.PopupWindow(new Uri("http://www.bing.com"), "blah", new HtmlPopupWindowOptions()); This does not work. I get a new window every time. Option 3 - Get a handle to the opened window and reuse that in subsequent calls private HtmlWindow window; private void navigationButton_Click(object sender, RoutedEventArgs e) { if (window == null) window = HtmlPage.Window.Navigate(new Uri("http://www.bing.com"), "blah"); else window.Navigate(new Uri("http://www.bing.com"), "blah"); if (window == null) MessageBox.Show("it's null"); } This does not work. I tried the same for the PopupWindow() method and the window is null every time, so a new window is opened on every click. I have checked both the EnableHtmlAccess and the IsPopupWindowAllowed properties, and they return true, as they should. Option 4 - Use Eval method to execute some custom javascript private const string javascript = @"var popup = window.open('', 'blah') ; if(popup.location != 'http://www.bing.com' ){ popup.location = 'http://www.bing.com'; } popup.focus();"; private void navigationButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.Eval(javascript); } This does not work. I get a new window every time. option 5 - Use CreateInstance to run some custom javascript on the page private void navigationButton_Click(object sender, RoutedEventArgs e) { HtmlPage.Window.CreateInstance("thisIsPlainHell"); } and in my aspx I have function thisIsPlainHell() { var popup = window.open('http://www.bing.com', 'blah'); popup.focus(); } Guess what? This does work. The only thing is that the window behaves a little strange and I'm not sure why: I'm behind a proxy and in all other scenarios I'm being prompted for my password. In this case however I am not (and am thus not able to reach the external site - bing in this case). This is not really a huge issue atm, but I just don't understand what's goign on here. Whenever I type another url in the address bar of the popup window (eg www.google.com) and press enter, it opens up another window and prompts me for my proxy password. As a temporary solution option 5 could do, but I don't like the fact that Silverlight is not able to manage this. One of the main reasons my client has opted for Silverlight is to be protected against all the browser specific hacking that comes with javascript. Am I doing something wrong? I'm definitely no javascript expert, so I'm hoping it's something obvious I'm missing here. Cheers, Phil

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with boost python a bit and ran into a problem. I tried to expose a C++ class to python which posed no problems. But I can't seem to manage to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using boost 1_42 prebuild by boostpro. I build the library using cmake and the vs2010 compiler. I have a very simple setup. The header-file (tutorial.h) looks like the following: #include <iostream> namespace TestBoostPython{ class TestClass { private: double m_x; public: TestClass(double x); double Get_x() const; void Set_x(double x); }; std::ostream &operator<<(std::ostream &ostr, const TestClass &ts); }; and the corresponding cpp-file looks like: #include <boost/python.hpp> #include "tutorial.h" using namespace TestBoostPython; TestClass::TestClass(double x) { m_x = x; } double TestClass::Get_x() const { return m_x; } void TestClass::Set_x(double x) { m_x = x; } std::ostream &operator<<(std::ostream &ostr, TestClass &ts) { ostr << ts.Get_x() << "\n"; return ostr; } BOOST_PYTHON_MODULE(testme) { using namespace boost::python; class_<TestClass>("TestClass", init<double>()) .add_property("x", &TestClass::Get_x, &TestClass::Set_x) .def(str(self)) ; } The CMakeLists.txt looks like the following: CMAKE_MINIMUM_REQUIRED(VERSION 2.8) project (testme) FIND_PACKAGE( Boost REQUIRED ) FIND_PACKAGE( Boost COMPONENTS python REQUIRED ) FIND_PACKAGE( PythonLibs REQUIRED ) set(Boost_USE_STATIC_LIBS OFF) set(Boost_USE_MULTITHREAD ON) INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS}) INCLUDE_DIRECTORIES ( ${PYTHON_INCLUDE_PATH} ) add_library(testme SHARED tutorial.cpp) target_link_libraries(testme ${Boost_PYTHON_LIBRARY}) target_link_libraries(testme ${PYTHON_LIBRARY} The build error I get is the following: Compiling... tutorial.cpp C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided with [ U=boost::python::api::object ] C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators::visit' with [ U=boost::python::api::object ] C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit,classT>(const V &,classT &)' being compiled with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_, V=boost::python::def_visitor ] C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor::visit>(classT &) const' being compiled with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_ ] .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_ &boost::python::class_::def(const boost::python::def_visitor &)' being compiled with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ] Does anyone have any idea on what went wrrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from python. I'd be very greatful for assistance. Thank you, Rickard
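    Two things stand out, and both match the usual causes of this particular def_visitor error (treat this as a guess, not a verified diagnosis). First, the header declares operator<< taking const TestClass &, but the .cpp defines it with a non-const reference, so the overload Boost.Python needs is declared but never defined. Second, the unqualified str(self) inside BOOST_PYTHON_MODULE can resolve to the wrong str; qualifying it via boost::python::self_ns is the commonly suggested fix. A sketch of the changed parts:

        // tutorial.cpp -- match the const-qualified declaration from tutorial.h
        std::ostream &operator<<(std::ostream &ostr, const TestClass &ts)
        {
            ostr << ts.Get_x() << "\n";
            return ostr;
        }

        BOOST_PYTHON_MODULE(testme)
        {
            using namespace boost::python;
            class_<TestClass>("TestClass", init<double>())
                .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
                .def(self_ns::str(self_ns::self))   // fully qualified to dodge the lookup ambiguity
                ;
        }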

    Read the article

  • ASP.NET 3.5/C# Menu Control in Master Page fails to use CSS styles

    - by Shaun
    I'm working on a web application that uses ASP.NET 3.5 and C#. Structurally, I have a master page with a menu control on it. The control serves as my navigation, and it gets its items from a SiteMapDataSource control and a corresponding Web.sitemap file. The problem is that some styles do not render properly when you specify the CssClass property. More specifically, the selected and hover styles don't respond to css styles. Consider the code below: <%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="Site" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.or/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>A webpage</title> </head> <body> <form id="form1" runat="server"> <div id="page"> <asp:Menu ID="navMenu" Orientation="Horizontal" StaticMenuStyle-CssClass="staticMenu" StaticMenuItemStyle-CssClass="staticMenuItem" StaticSelectedStyle-CssClass="staticSelectedItem" StaticHoverStyle-CssClass="staticHoverItem" runat="server"> </asp:Menu> <asp:SiteMapDataSource ID="srcSiteMap" runat="server" ShowStartingNode="false" /> <br /> <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> </form> </body> </html> Suppose I had a corresponding .css file with the following: .staticMenuItem { background-color:Red; } .staticSelectedItem { background-color:Green; } .staticHoverItem { background-color:Blue; } What will happen is that my item backgrounds will properly be red, but my selected item will not be green and the item I'm hovering my mouse over will not be blue. This seems true regardless of whether or not I include the style in the head of the master page or in an external file in default theme as specified in the web.config file. If I specify the styles in the asp.net xml like so: <asp:Menu ID="navMenu" Orientation="Horizontal" runat="server"> <StaticSelectedStyle BackColor="Green" Font-Underline="True" Font-Bold="True" /> <StaticHoverStyle BackColor="Gray" /> </asp:Menu> It appears to work properly in Firefox, but the style is never embedded in the html in Internet Explorer. Odd. Does anybody have any insight into what is causing this problem and how to neatly work around it? I'm aware I might be able to programmically determine the current page and select the corresponding menu item manually so it receives the proper style class, but before I resort to hacking C# and Javascript together to fix this functionality, I'm open to ideas. Thanks!
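    One hedged guess: the Menu control emits its own generated rules for the hover and selected states, and those can win the specificity contest against a bare class selector, so your class appears to be ignored even though it is applied. Raising specificity (the control renders the item text inside an anchor) or adding !important is a common workaround — a sketch:

        .staticMenuItem { background-color: Red; }

        /* target the rendered anchor as well, and outrank the generated rules */
        .staticSelectedItem,
        .staticSelectedItem a { background-color: Green !important; }

        .staticHoverItem,
        .staticHoverItem a { background-color: Blue !important; }

    If that works, it confirms a specificity problem rather than a markup one; the CSS Friendly Control Adapters are the longer-term option if you need full control over what the Menu renders.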

    Read the article

  • DataGrid height issue in nested DataGrid (when using three DataGrids)

    - by prince23
    hi, i have a nested datagrid(which is of three data grid). here i am able to show data with no issue. the first datagrid has 5 rows the main problem that comes here is when you click any row in first datagrid i show 2 datagrid( which has 10 rows) lets say i click 3 row in 2 data grid. it show further records in third datagrid. again when i click 5 row in 2 data grid it show further records in third datagrid. now all the recods are shown fine when u try to collpase the 3 row in 2 data grid it collpase but the issue is the height what the third datagrid which took space for showing the records( we can see a blank space showing between the main 2 datagrid and third data grid) in every grid first column i have an button where i am writing this code for expand and collpase this is the functionality i am implementing in all the datagrid button where i do expand collpase. hope my question is clear . any help would great private void btn1_Click(object sender, RoutedEventArgs e) { try { Button btnExpandCollapse = sender as Button; Image imgScore = (Image)btnExpandCollapse.FindName("img"); DependencyObject dep = (DependencyObject)e.OriginalSource; while ((dep != null) && !(dep is DataGridRow)) { dep = VisualTreeHelper.GetParent(dep); } // if we found the clicked row if (dep != null && dep is DataGridRow) { DataGridRow row = (DataGridRow)dep; // change the details visibility if (row.DetailsVisibility == Visibility.Collapsed) { imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Visible; } else { imgScore.Source = new BitmapImage(new Uri("/Images/a2JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Collapsed; } } } catch (System.Exception) { } } --------------------------------------- 2 datagrid private void btn2_Click(object sender, RoutedEventArgs e) { try { Button btnExpandCollapse = sender as Button; Image imgScore = (Image)btnExpandCollapse.FindName("img"); DependencyObject dep = (DependencyObject)e.OriginalSource; while ((dep != null) && !(dep is DataGridRow)) { dep = VisualTreeHelper.GetParent(dep); } // if we found the clicked row if (dep != null && dep is DataGridRow) { DataGridRow row = (DataGridRow)dep; // change the details visibility if (row.DetailsVisibility == Visibility.Collapsed) { imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Visible; } else { imgScore.Source = new BitmapImage(new Uri("/Images/a2JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Collapsed; } } } catch (System.Exception) { } } ----------------- 3 datagrid private void btn1_Click(object sender, RoutedEventArgs e) { try { Button btnExpandCollapse = sender as Button; Image imgScore = (Image)btnExpandCollapse.FindName("img"); DependencyObject dep = (DependencyObject)e.OriginalSource; while ((dep != null) && !(dep is DataGridRow)) { dep = VisualTreeHelper.GetParent(dep); } // if we found the clicked row if (dep != null && dep is DataGridRow) { DataGridRow row = (DataGridRow)dep; // change the details visibility if (row.DetailsVisibility == Visibility.Collapsed) { imgScore.Source = new BitmapImage(new Uri("/Images/a1.JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Visible; } else { imgScore.Source = new BitmapImage(new Uri("/Images/a2JPG", UriKind.Relative)); row.DetailsVisibility = Visibility.Collapsed; } } } catch (System.Exception) { } }**

    Read the article

  • jQuery DataTables server side processing and ASP.Net

    - by Chad
    I'm trying to use the server side functionality of the jQuery Datatables plugin with ASP.Net. The ajax request is returning valid JSON, but nothing is showing up in the table. I originally had problems with the data I was sending in the ajax request. I was getting a "Invalid JSON primative" error. I discovered that the data needs to be in a string instead of JSON serialized, as described in this post: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/. I wasn't quite sure how to fix that, so I tried adding this in the ajax request: "data": "{'sEcho': '" + aoData.sEcho + "'}" If the aboves eventually works I'll add the other parameters later. Right now I'm just trying to get something to show up in my table. The returning JSON looks ok and validates, but the sEcho in the post is undefined, and I think thats why no data is being loaded into the table. So, what am I doing wrong? Am I even on the right track or am I being stupid? Does anyone ran into this before or have any suggestions? Here's my jQuery: $(document).ready(function() { $("#grid").dataTable({ "bJQueryUI": true, "sPaginationType": "full_numbers", "bServerSide":true, "sAjaxSource": "GridTest.asmx/ServerSideTest", "fnServerData": function(sSource, aoData, fnCallback) { $.ajax({ "type": "POST", "dataType": 'json', "contentType": "application/json; charset=utf-8", "url": sSource, "data": "{'sEcho': '" + aoData.sEcho + "'}", "success": fnCallback }); } }); }); HTML: <table id="grid"> <thead> <tr> <th>Last Name</th> <th>First Name</th> <th>UserID</th> </tr> </thead> <tbody> <tr> <td colspan="5" class="dataTables_empty">Loading data from server</td> </tr> </tbody> </table> Webmethod: <WebMethod()> _ Public Function ServerSideTest() As Data Dim list As New List(Of String) list.Add("testing") list.Add("chad") list.Add("testing") Dim container As New List(Of List(Of String)) container.Add(list) list = New List(Of String) list.Add("testing2") list.Add("chad") list.Add("testing") container.Add(list) HttpContext.Current.Response.ContentType = "application/json" Return New Data(HttpContext.Current.Request("sEcho"), 2, 2, container) End Function Public Class Data Private _iTotalRecords As Integer Private _iTotalDisplayRecords As Integer Private _sEcho As Integer Private _sColumns As String Private _aaData As List(Of List(Of String)) Public Property sEcho() As Integer Get Return _sEcho End Get Set(ByVal value As Integer) _sEcho = value End Set End Property Public Property iTotalRecords() As Integer Get Return _iTotalRecords End Get Set(ByVal value As Integer) _iTotalRecords = value End Set End Property Public Property iTotalDisplayRecords() As Integer Get Return _iTotalDisplayRecords End Get Set(ByVal value As Integer) _iTotalDisplayRecords = value End Set End Property Public Property aaData() As List(Of List(Of String)) Get Return _aaData End Get Set(ByVal value As List(Of List(Of String))) _aaData = value End Set End Property Public Sub New(ByVal sEcho As Integer, ByVal iTotalRecords As Integer, ByVal iTotalDisplayRecords As Integer, ByVal aaData As List(Of List(Of String))) If sEcho <> 0 Then Me.sEcho = sEcho Me.iTotalRecords = iTotalRecords Me.iTotalDisplayRecords = iTotalDisplayRecords Me.aaData = aaData End Sub Returned JSON: {"__type":"Data","sEcho":0,"iTotalRecords":2,"iTotalDisplayRecords":2,"aaData":[["testing","chad","testing"],["testing2","chad","testing"]]}
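    The sEcho problem is structural: aoData is an array of { name, value } pairs, not an object, so aoData.sEcho is always undefined (which is why the response comes back with "sEcho":0). A sketch of a fnServerData that folds the pairs into an object, sends them as the JSON string ASP.NET expects, and unwraps the "d" property that ASMX script services add around the payload:

        "fnServerData": function (sSource, aoData, fnCallback) {
            // aoData = [{name: "sEcho", value: 1}, {name: "iDisplayStart", value: 0}, ...]
            var params = {};
            $.each(aoData, function (i, pair) { params[pair.name] = pair.value; });

            $.ajax({
                type: "POST",
                dataType: "json",
                contentType: "application/json; charset=utf-8",
                url: sSource,
                data: JSON.stringify(params),            // a JSON string body, not a serialized object
                success: function (msg) {
                    fnCallback(msg.d ? msg.d : msg);     // .NET 3.5+ wraps the result in "d"
                }
            });
        }

    For this to bind on the server, the web method would need to declare matching parameters (sEcho, iDisplayStart, iDisplayLength, and so on) rather than reading Request("sEcho"), and JSON.stringify needs json2.js on older browsers.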

    Read the article

  • "Access is denied" JavaScript error when trying to access the document object of a programmatically-created <iframe>

    - by Bungle
    I have project in which I need to create an <iframe> element using JavaScript and append it to the DOM. After that, I need to insert some content into the <iframe>. It's a widget that will be embedded in third-party websites. I don't set the "src" attribute of the <iframe> since I don't want to load a page; rather, it is used to isolate/sandbox the content that I insert into it so that I don't run into CSS or JavaScript conflicts with the parent page. I'm using JSONP to load some HTML content from a server and insert it in this <iframe>. I have this working fine, with one serious exception - if the document.domain property is set in the parent page (which it may be in certain environments in which this widget is deployed), Internet Explorer (probably all versions, but I've confirmed in 6, 7, and 8) gives me an "Access is denied" error when I try to access the document object of this <iframe> I've created. It doesn't happen in any other browsers I've tested in (all major modern ones). This makes some sense, since I'm aware that Internet Explorer requires you to set the document.domain of all windows/frames that will communicate with each other to the same value. However, I'm not aware of any way to set this value on a document that I can't access. Is anyone aware of a way to do this - somehow set the document.domain property of this dynamically created <iframe>? Or am I not looking at it from the right angle - is there another way to achieve what I'm going for without running into this problem? I do need to use an <iframe> in any case, as the isolated/sandboxed window is crucial to the functionality of this widget. Here's my test code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Document.domain Test</title> <script type="text/javascript"> document.domain = 'onespot.com'; // set the page's document.domain </script> </head> <body> <p>This is a paragraph above the &lt;iframe&gt;.</p> <div id="placeholder"></div> <p>This is a paragraph below the &lt;iframe&gt;.</p> <script type="text/javascript"> var iframe = document.createElement('iframe'), doc; // create <iframe> element document.getElementById('placeholder').appendChild(iframe); // append <iframe> element to the placeholder element setTimeout(function() { // set a timeout to give browsers a chance to recognize the <iframe> doc = iframe.contentWindow || iframe.contentDocument; // get a handle on the <iframe> document alert(doc); if (doc.document) { // HEREIN LIES THE PROBLEM doc = doc.document; } doc.body.innerHTML = '<h1>Hello!</h1>'; // add an element }, 10); </script> </body> </html> I've hosted it at: http://troy.onespot.com/static/access_denied.html As you'll see if you load this page in IE, at the point that I call alert(), I do have a handle on the window object of the <iframe>; I just can't get any deeper, into its document object. Thanks very much for any help or suggestions! I'll be indebted to whomever can help me find a solution to this.
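    A widely used workaround for exactly this IE behaviour (hedged: it relies on IE executing a javascript: URL as the frame's initial document, which the other browsers simply tolerate) is to give the dynamically created <iframe> a src that sets its own document.domain to the same value as the parent before any parent script touches it:

        var iframe = document.createElement('iframe');
        // IE only lets the parent in if the frame's own document agrees on document.domain,
        // so have the frame's first document set it itself
        iframe.src = "javascript:void((function(){" +
                     "document.open();" +
                     "document.domain='onespot.com';" +   // must match the parent page's value
                     "document.write('');" +
                     "document.close();" +
                     "})())";
        document.getElementById('placeholder').appendChild(iframe);

    After that, iframe.contentWindow.document should be reachable in IE as well as the other browsers, and the rest of the timeout/innerHTML code can stay as it is.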

    Read the article

  • Rails HABTM accepts_nested_attributes_for mapping table

    - by Rabbott
    Currently I have a habtm relationship between a datastream (stream), and a chart. I want to be able to use the 'typical' jquery "add new stream" method to add a new entry into a mapping table that holds the chart_id, and stream_id.. But I think that most examples out there are made to handle a one to many relationship.. The functionality provided here by Ryan Bates is what I'm looking for, But I dont want to use checkboxes, I want to use a select (drop down) to add new items to the mapping table. I need THIS and THIS combined chart.rb class Chart < ActiveRecord::Base has_and_belongs_to_many :streams, :readonly => false, :join_table => 'charts_streams' accepts_nested_attributes_for :streams, :reject_if => lambda { |a| a.values.all?(&:blank?) }, :allow_destroy => true stream.rb class Stream < ActiveRecord::Base has_and_belongs_to_many :charts, :readonly => false, :join_table => 'charts_streams' application.js /* * Method used to add new child form partials */ $('form a.add_child').click(function() { var assoc = $(this).attr('data-association'); var content = $('#' + assoc + '_fields_template').html(); var regexp = new RegExp('new_' + assoc, 'g'); var new_id = new Date().getTime(); var newElements = jQuery(content.replace(regexp, new_id)).hide(); $(this).parent().before(newElements).prev().slideFadeToggle(); return false; }); /* * Method used to remove child form partials */ $('form a.remove_child').live('click', function() { if(confirm('Are you sure?')) { var hidden_field = $(this).prev('input[type=hidden]')[0]; if(hidden_field) { hidden_field.value = '1'; } $(this).parents('.fields').slideFadeToggle(); } return false; }); _form.html.erb <div id="streams"> <h4>Streams</h4> <% form.fields_for :streams do |stream_form| %> <%= render :partial => "stream", :locals => {:f => stream_form} %> <% end %> </div> <p><%= add_child_link 'Add a stream', :streams %></p> <%= new_child_fields_template(form, :streams) %> _stream.html.erb <div class="fields"> <p> <%= f.label :stream_id %><br /> <%= select_tag "chart[stream_ids][]", options_for_select(@streams.map {|s| [s.label, s.id]}, f.object.id) %> </p> <p><%= remove_child_link 'Remove stream', f %></p> </div> This may be overkill and what I am looking to do could be much easier.. Basically when I create a chart I want to be able to add as many streams to it as I want, using the javascript for adding a new entry, and removing an old one... Thanks for the help! This is 'working' but gives me a ActiveRecord::ReadOnlyRecord error when it hits the chart.update_attributes method.. am I adding the the :readonly = false in the wrong spot? Or am I just doing it completely wrong?
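    Regarding the ReadOnlyRecord error: accepts_nested_attributes_for tends to behave better with an explicit join model than with has_and_belongs_to_many, because a real model gives Rails something writable to go through. A sketch of that shape (an assumption, not a drop-in fix — it treats charts_streams as a model, which may require adding an id column):

        # chart.rb -- has_many :through instead of HABTM
        class Chart < ActiveRecord::Base
          has_many :chart_streams
          has_many :streams, :through => :chart_streams

          accepts_nested_attributes_for :streams,
            :reject_if => lambda { |a| a.values.all?(&:blank?) },
            :allow_destroy => true
        end

        # chart_stream.rb -- hypothetical join model over the existing table
        class ChartStream < ActiveRecord::Base
          set_table_name 'charts_streams'
          belongs_to :chart
          belongs_to :stream
        end

    With that in place the select can keep posting chart[stream_ids][], and the add/remove JavaScript does not need to change.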

    Read the article

  • Get information about AutocompleteTextView from resulting AutoCompleteTextView$DropDownListView

    - by Stev_k
    I'm using 3 AutocompleteTextViews to suggest entries from a database. I subclassed AutocompleteTextView to handle setting the default text to null when clicked and setting back to the default instructions if moved away and nothing is entered. I was using a SimpleCursorAdapter to bind to the view, but I discovered that there was no way I could get the id of the AutocompleteTextView from an OnItemClickListener, which I needed to put additional information from the selected row in a variable depending on which AutocompleteTextView it was from. All I could access was the AutoCompleteTextView$DropDownListView, which is an undocumented inner class that appears to offer no real functionality. Neither was there a way to go up the view hierarchy to get the original AutocompleteTextView. So I subclassed SimpleCursorAdapter and added an int to the constructor to identify which AutocompleteTextView the adapter was from, and I was able to access this from the view passed into OnItemClick(). So, although my solution works fine, I wonder if it is possible to get the id of an AutocompleteTextView from its DropDownListView? I am also using another database query which gets the id from the OnItemClick and then looks up the data for that item, because I couldn't find a way of converting more than one column to a string. Should I be using CursorAdapter for this, to save initiating another query? Oh, and another thing, do I need a database cursor initially (all_cursor) when all I'm doing is filtering on it to get a new cursor? Seems like overkill. Activity .... dbse.openDataBase(); Cursor all_Cursor = dbse.autocomplete_query(); startManagingCursor(all_Cursor); String[] from_all = new String[]{DbAdapter.KEY_NAME}; int[] to_all = new int[] {android.R.id.text1}; from_adapt = new AutocompleteAdapter(FROM_DBADAPTER, this,android.R.layout.simple_dropdown_item_1line, all_Cursor, from_all, to_all); from_adapt.setStringConversionColumn(1); from_adapt.setFilterQueryProvider(this); to_adapt = new AutocompleteAdapter(TO_DBADAPTER, this,android.R.layout.simple_dropdown_item_1line, all_Cursor, from_all, to_all); to_adapt.setStringConversionColumn(1); to_adapt.setFilterQueryProvider(this); from_auto_complete = (Autocomplete) findViewById(R.id.entry_from); from_auto_complete.setAdapter(from_adapt); from_auto_complete.setOnItemClickListener(this); to_auto_complete = (Autocomplete) findViewById(R.id.entry_to); to_auto_complete.setAdapter(to_adapt); to_auto_complete.setOnItemClickListener(this); public void onItemClick (AdapterView<?> parent, View view, int position, long id) { Cursor selected_row_cursor = dbse.data_from_id(id); selected_row_cursor.moveToFirst(); String lat = selected_row_cursor.getString(1); String lon = selected_row_cursor.getString(2); int source = ((AutocompleteAdapter) parent.getAdapter()).getSource(); Autocomplete class: public class Autocomplete extends AutoCompleteTextView implements OnTouchListener,OnFocusChangeListener{ String textcontent; Context mycontext = null; int viewid = this.getId(); public Autocomplete(Context context, AttributeSet attrs) { super(context, attrs); textcontent = this.getText().toString(); mycontext = context; this.setOnFocusChangeListener(this); this.setOnTouchListener(this); } public boolean onTouch(View v, MotionEvent event) { if (textcontent.equals(mycontext.getString(R.string.from_textbox)) | textcontent.equals(mycontext.getString(R.string.to_textbox)) | textcontent.equals(mycontext.getString(R.string.via_textbox))) { this.setText(""); } return false; } public void 
onFocusChange(View v, boolean hasFocus) { if (hasFocus == false) { int a = this.getText().length(); if (a == 0){ if (viewid == R.id.entry_from) {this.setText(R.string.from_textbox);} if (viewid == R.id.entry_to) {this.setText(R.string.to_textbox);} if (viewid == R.id.entry_via) {this.setText(R.string.via_textbox);} } } } } Adapter: public class AutocompleteAdapter extends SimpleCursorAdapter { int source; public AutocompleteAdapter(int query_source, Context context, int layout, Cursor c, String[] from, int[] to) { super(context, layout, c, from, to); source = query_source; } public int getSource() { return source; } } sorry that's a lot of code! Thanks for your help. Stephen
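    On the first question: the DropDownListView really is just an internal popup list, so there is no supported way to walk from it back to the owning AutoCompleteTextView — tagging the adapter, as done above, is a legitimate answer. An alternative sketch that avoids the extra constructor argument is to give each box its own anonymous listener, so the source view is captured directly (handleSelection is a hypothetical helper holding the body of the shared onItemClick):

        from_auto_complete.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                handleSelection(from_auto_complete, id);   // we already know which box fired
            }
        });
        to_auto_complete.setOnItemClickListener(new AdapterView.OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                handleSelection(to_auto_complete, id);
            }
        });

        // hypothetical helper -- same work as the old shared onItemClick, minus the source check
        private void handleSelection(Autocomplete source, long id) {
            Cursor row = dbse.data_from_id(id);
            row.moveToFirst();
            String lat = row.getString(1);
            String lon = row.getString(2);
            // ...store lat/lon against 'source' as needed
        }

    On the second question: the already-filtered cursor row is also available directly in onItemClick via parent.getItemAtPosition(position), so if the filter query selects the lat/lon columns up front, the second lookup by id can be dropped entirely. And yes, the unfiltered all_Cursor is only needed to seed the adapter; once a FilterQueryProvider is set, the filtered cursor replaces it.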

    Read the article

  • How to configure a GridView CommandField to trigger Full Page Update in UpdatePanel

    - by Frinavale
    I have 2 user controls on my page. One is used for searching, and the other is used for editing (along with a few other things). The user control that provides the search functionality uses a GridView to display the search results. This GridView has a CommandField used for editing (showEditButton="true"). I would like to place the GridView into an UpdatePanel so that paging through the search results will be smooth. The thing is that when the user clicks the Edit Link (the CommandField) I need to preform a full page postback so that the search user control can be hidden and the edit user control can be displayed. Edit: the reason I need to do a full page postback is because the edit user control is outside of the UpdatePanel that my GridView is in. It is not only outside the UpdatePanel, but it's in a completely different user control. I have no idea how to add the CommandField as a full page postback trigger to the UpdatePanel. The PostBackTrigger (which is used to indicate the controls cause a full page postback) takes a ControlID as a parameter; however the CommandButton does not have an ID...and you can see why I'm having a problem with this. Update on what else I've tried to solve the problem: I took a new approach to solving the problem. In my new approach, I used a TemplateField instead of a CommandField. I placed a LinkButton control in the TemplateField and gave it a name. During the GridView's RowDataBound event I retrieved the LinkButton control and added it to the UpdatePanel's Triggers. This is the ASP Markup for the UpdatePanel and the GridView <asp:UpdatePanel ID="SearchResultsUpdateSection" runat="server"> <ContentTemplate> <asp:GridView ID="SearchResultsGrid" runat="server" AllowPaging="true" AutoGenerateColumns="false"> <Columns> <asp:TemplateField> <HeaderTemplate></HeaderTemplate> <ItemTemplate> <asp:LinkButton ID="Edit" runat="server" Text="Edit"></asp:LinkButton> </ItemTemplate> </asp:TemplateField> <asp:BoundField ...... </Columns> </asp:GridView> </ContentTemplate> </asp:UpdatePanel> In my VB.NET code. I implemented a function that handles the GridView's RowDataBound event. In this method I find the LinkButton for the row being bound to, create a PostBackTrigger for the LinkButton, and add it to the UpdatePanel's Triggers. This means that a PostBackTrigger is created for every "edit" LinkButton in the GridView Edit: this did not create a PostBackTrigger for Every "edit" LinkButton because the ID is the same for all LinkButtons in the GridView. Private Sub SearchResultsGrid_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles SearchResultsGrod.RowDataBound If e.Row.RowType = DataControlRowType.Header Then ''I am doing stuff here that does not pertain to the problem Else Dim editLink As LinkButton = CType(e.Row.FindControl("Edit"), LinkButton) If editLink IsNot Nothing Then Dim fullPageTrigger As New PostBackTrigger fullPageTrigger.ControlID = editLink.ID SearchResultsUpdateSection.Triggers.Add(fullPageTrigger) End If End If End Sub And instead of handling the GridView's RowEditing event for editing purposes i use the RowCommand instead. Private Sub SearchResultsGrid_RowCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles SearchResultsGrid.RowCommand RaiseEvent EditRecord(Me, New EventArgs()) End Sub This new approach didn't work at all because all of the LinkButtons in the GridView have the same ID. 
After reading the MSDN article on the UpdatePanel.Triggers Property I have the impression that Triggers can only be defined declaratively. This would mean that anything I did in the VB code wouldn't work. Any advise would be greatly appreciated. Thanks, -Frinny
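    There is a simpler hook for this: ScriptManager.RegisterPostBackControl registers an individual control instance for a full-page postback, which sidesteps both the declarative-trigger limitation and the duplicate-ID problem. A sketch against the RowDataBound handler already shown (it assumes the page's ScriptManager is reachable via GetCurrent):

        Private Sub SearchResultsGrid_RowDataBound(ByVal sender As Object, _
                ByVal e As GridViewRowEventArgs) Handles SearchResultsGrid.RowDataBound
            If e.Row.RowType = DataControlRowType.DataRow Then
                Dim editLink As LinkButton = CType(e.Row.FindControl("Edit"), LinkButton)
                If editLink IsNot Nothing Then
                    ' this specific LinkButton instance now triggers a full postback,
                    ' even though it sits inside the UpdatePanel
                    ScriptManager.GetCurrent(Page).RegisterPostBackControl(editLink)
                End If
            End If
        End Sub

    The RowCommand handler that raises EditRecord can stay exactly as it is; only the registration changes.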

    Read the article

  • How do I get the Visual Studio build process to pass a condition to a library (external) project

    - by Billy Talented
    I have tried several solutions for this problem. Enough to know I do not know enough about MSBuild to do this elegantly but I feel like there should be a method. I have a set of libraries for working with .net projects. A few of the projects utilize System.Web.Mvc - which recently released Version 2 - and we are looking forward to the upgrade. Currently sites which reference this library reference it directly by the project(csproj) on the developer's computer - not a built version of the library so that changes and source code case easily be viewed when dealing code from this library. This works quite well and would prefer to not have to switch to binary references (but will if this is the only solution). The problem I have is that because of some of the functionality that was added onto the MVC1 based library (view engines, model binders etc) several of the sites reliant on these libraries need to stay on MVC1 until we have full evaluated and tested them on MVC2. I would prefer to not have to fork or have two copies on each dev machine. So what I would like to be able to do is set a property group value in the referencing web application and have this read by the above mentions library with the caviat that when working directly on the library via its containing solution I would like to be able to control this via Configuration Manager by selecting a build type and that property overriding the build behavior of the solution (i.e. 'Debug - MVC1' vs 'Debug -MVC2') - I have this working via: <Choose> <When Condition=" '$(Configuration)|$(Platform)' == 'Release - MVC2|AnyCPU' Or '$(Configuration)|$(Platform)' == 'Debug - MVC2|AnyCPU'"> <ItemGroup> <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> </Reference> <Reference Include="Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>..\Dependancies\Web\MVC2\Microsoft.Web.Mvc.dll</HintPath> </Reference> </ItemGroup> </When> <Otherwise> <ItemGroup> <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath> </Reference> </ItemGroup> </Otherwise> The item that I am struggling with is the cross solution issue(solution TheWebsite references this project and needs to control which build property to use) that I have not found a way to work with that I think is a solid solution that enabled the build within visual studio to work as it has to date. Other bits: we are using VS2008, Resharper, TeamCity for CI, SVN for source control.
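    One low-tech pattern that fits this (a sketch — the file name and property are invented here) is to move the switch out of the configuration name and into a small shared .props file that both the library and every referencing web project import; the Choose block then tests the property, and a solution, a developer, or the TeamCity build can override it with /p:MvcVersion=2:

        <!-- MvcVersion.props (hypothetical), checked in next to the solutions -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <MvcVersion Condition=" '$(MvcVersion)' == '' ">1</MvcVersion>
          </PropertyGroup>
        </Project>

        <!-- in the library's .csproj, near the top -->
        <Import Project="..\MvcVersion.props" />

        <!-- and the existing Choose keys off the property instead of the configuration name -->
        <Choose>
          <When Condition=" '$(MvcVersion)' == '2' ">
            ...
          </When>
          <Otherwise>
            ...
          </Otherwise>
        </Choose>

    The 'Debug - MVC2' configurations can keep working by setting <MvcVersion>2</MvcVersion> inside the corresponding conditioned PropertyGroup of the library project, so building from either solution or from the command line picks the intended reference set.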

    Read the article

  • Speeding up a search (.NET 4.0)

    - by user231465
    Wondering if I can speed up the search. I need to build a functionality that has to be used by many UI screens The one I have got works but I need to make sure I am implementing a fast algoritim if you like It's like an incremental search. User types a word to search for eg const string searchFor = "Guinea"; const char nextLetter = ' ' It looks in the list and returns 2 records "Guinea and Guinea Bissau " User types a word to search for eg const string searchFor = "Gu"; const char nextLetter = 'i' returns 3 results. This is the function but I would like to speed it up. Is there a pattern for this kind of search? class Program { static void Main() { //Find all countries that begin with string + a possible letter added to it //const string searchFor = "Guinea"; //const char nextLetter = ' '; //returns 2 results const string searchFor = "Gu"; const char nextLetter = 'i'; List<string> result = FindPossibleMatches(searchFor, nextLetter); result.ForEach(x=>Console.WriteLine(x)); //returns 3 results Console.Read(); } /// <summary> /// Find all possible matches /// </summary> /// <param name="searchFor">string to search for</param> /// <param name="nextLetter">pretend user as just typed a letter</param> /// <returns></returns> public static List<string> FindPossibleMatches (string searchFor, char nextLetter) { var hashedCountryList = new HashSet<string>(CountriesList()); var result=new List<string>(); IEnumerable<string> tempCountryList = hashedCountryList.Where(x => x.StartsWith(searchFor)); foreach (string item in tempCountryList) { string tempSearchItem; if (nextLetter == ' ') { tempSearchItem = searchFor; } else { tempSearchItem = searchFor + nextLetter; } if(item.StartsWith(tempSearchItem)) { result.Add(item); } } return result; } /// <summary> /// Returns list of countries. /// </summary> public static string[] CountriesList() { return new[] { "Afghanistan", "Albania", "Algeria", "American Samoa", "Andorra", "Angola", "Anguilla", "Antarctica", "Antigua And Barbuda", "Argentina", "Armenia", "Aruba", "Australia", "Austria", "Azerbaijan", "Bahamas", "Bahrain", "Bangladesh", "Barbados", "Belarus", "Belgium", "Belize", "Benin", "Bermuda", "Bhutan", "Bolivia", "Bosnia Hercegovina", "Botswana", "Bouvet Island", "Brazil", "Brunei Darussalam", "Bulgaria", "Burkina Faso", "Burundi", "Byelorussian SSR", "Cambodia", "Cameroon", "Canada", "Cape Verde", "Cayman Islands", "Central African Republic", "Chad", "Chile", "China", "Christmas Island", "Cocos (Keeling) Islands", "Colombia", "Comoros", "Congo", "Cook Islands", "Costa Rica", "Cote D'Ivoire", "Croatia", "Cuba", "Cyprus", "Czech Republic", "Czechoslovakia", "Denmark", "Djibouti", "Dominica", "Dominican Republic", "East Timor", "Ecuador", "Egypt", "El Salvador", "England", "Equatorial Guinea", "Eritrea", "Estonia", "Ethiopia", "Falkland Islands", "Faroe Islands", "Fiji", "Finland", "France", "Gabon", "Gambia", "Georgia", "Germany", "Ghana", "Gibraltar", "Great Britain", "Greece", "Greenland", "Grenada", "Guadeloupe", "Guam", "Guatemela", "Guernsey", "Guiana", "Guinea", "Guinea Bissau", "Guyana", "Haiti", "Heard Islands", "Honduras", "Hong Kong", "Hungary", "Iceland", "India", "Indonesia", "Iran", "Iraq", "Ireland", "Isle Of Man", "Israel", "Italy", "Jamaica", "Japan", "Jersey", "Jordan", "Kazakhstan", "Kenya", "Kiribati", "Korea, South", "Korea, North", "Kuwait", "Kyrgyzstan", "Lao People's Dem. 
Rep.", "Latvia", "Lebanon", "Lesotho", "Liberia", "Libya", "Liechtenstein", "Lithuania", "Luxembourg", "Macau", "Macedonia", "Madagascar", "Malawi", "Malaysia", "Maldives", "Mali", "Malta", "Mariana Islands", "Marshall Islands", "Martinique", "Mauritania", "Mauritius", "Mayotte", "Mexico", "Micronesia", "Moldova", "Monaco", "Mongolia", "Montserrat", "Morocco", "Mozambique", "Myanmar", "Namibia", "Nauru", "Nepal", "Netherlands", "Netherlands Antilles", "Neutral Zone", "New Caledonia", "New Zealand", "Nicaragua", "Niger", "Nigeria", "Niue", "Norfolk Island", "Northern Ireland", "Norway", "Oman", "Pakistan", "Palau", "Panama", "Papua New Guinea", "Paraguay", "Peru", "Philippines", "Pitcairn", "Poland", "Polynesia", "Portugal", "Puerto Rico", "Qatar", "Reunion", "Romania", "Russian Federation", "Rwanda", "Saint Helena", "Saint Kitts", "Saint Lucia", "Saint Pierre", "Saint Vincent", "Samoa", "San Marino", "Sao Tome and Principe", "Saudi Arabia", "Scotland", "Senegal", "Seychelles", "Sierra Leone", "Singapore", "Slovakia", "Slovenia", "Solomon Islands", "Somalia", "South Africa", "South Georgia", "Spain", "Sri Lanka", "Sudan", "Suriname", "Svalbard", "Swaziland", "Sweden", "Switzerland", "Syrian Arab Republic", "Taiwan", "Tajikista", "Tanzania", "Thailand", "Togo", "Tokelau", "Tonga", "Trinidad and Tobago", "Tunisia", "Turkey", "Turkmenistan", "Turks and Caicos Islands", "Tuvalu", "Uganda", "Ukraine", "United Arab Emirates", "United Kingdom", "United States", "Uruguay", "Uzbekistan", "Vanuatu", "Vatican City State", "Venezuela", "Vietnam", "Virgin Islands", "Wales", "Western Sahara", "Yemen", "Yugoslavia", "Zaire", "Zambia", "Zimbabwe" }; } } } Any suggestions? Thanks
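    A sketch of a leaner version: since the country list never changes between keystrokes, sort it once, binary-search for the prefix, and walk forward while the prefix still matches; that removes the per-call HashSet construction and the double StartsWith pass. (The behaviour is meant to be the same: searchFor plus the optional next letter.)

        // prepared once and reused for every keystroke
        private static readonly string[] Countries = BuildSortedCountries();

        private static string[] BuildSortedCountries()
        {
            string[] items = CountriesList();
            Array.Sort(items, StringComparer.OrdinalIgnoreCase);
            return items;
        }

        public static List<string> FindPossibleMatches(string searchFor, char nextLetter)
        {
            string prefix = nextLetter == ' ' ? searchFor : searchFor + nextLetter;

            // index of the first element >= prefix
            int index = Array.BinarySearch(Countries, prefix, StringComparer.OrdinalIgnoreCase);
            if (index < 0) index = ~index;

            var result = new List<string>();
            while (index < Countries.Length &&
                   Countries[index].StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            {
                result.Add(Countries[index]);
                index++;
            }
            return result;
        }

    For a ~230-item list either version is effectively instant; the gain is mostly in allocations, and the shape generalises better if the list ever comes from a database.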

    Read the article

  • Partial view links not working in Firefox

    - by user329540
    I have a MVC4 asp.net application, I have two layouts a main layout for the main page and a second layout for the nested pages. The problem I have is with the second layout, on this layout I call a partial view which has my navigation links. In IE the navigation menu displays fine and when each item is clicked it navigates as expected. However in FF when the page renders the navigation bar is displayed but it has no 'click functionality' if you will its as if its simply text. My layout of nested page: <header> <img src="../../Images/fronttop.png" id="nestedPageheader" alt="Background Img"/> <div class="content-wrapper"> <section > <nav> <div id="navcontainer"> </div> </nav> </section> <div> </header> The script to retreive partial view and information for dynamic links on layout page. <script type="text/javascript"> var menuLoaded = false; $(document).ready(function () { if($('#navcontainer')[0].innerHTML.trim() == "") { $.ajax({ url: "@Url.Content("~/Home/MenuLayout")", type: "GET", success: function (response, status, xhr) { var nvContainer = $('#navcontainer'); nvContainer.html(response); menuLoaded = true; }, error: function (XMLHttpRequest, textStatus, errorThrown) { var nvContainer = $('#navcontainer'); nvContainer.html(errorThrown); } }); } }); </script> May partial view: @model Mscl.OpCost.Web.Models.stuffmodel <div class="menu"> <ul> <li><a>@Html.ActionLink("Home", "Index", "Home")</a></li> <li><a>@Html.ActionLink("some stuff", "stuffs", "stuff")</a></li> <li> <h5><a><span>somestuff</span></a></h5> <ul> <li><a>stuffs1s</a> <ul> @foreach (var image in Model.stuffs.Where(g => g.Grouping == 1)) { <li> <a>@Html.ActionLink(image.Title, "stuffs", "stuff", new { Id = image.CategoryId }, null)</a> </li> } </ul> </li> </ul> </il> </ul> </div> I need to know why this works fine in IE but why its not working in FF(all versions). Any assistance would be appreciated.
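    One concrete problem in the partial itself: Html.ActionLink already renders a complete <a> element, and each one here is wrapped in another bare <a>. Nested anchors are invalid HTML, and Firefox's error recovery tends to split them apart, which can leave the visible text sitting outside any working link — exactly the "just text" behaviour described. A sketch of the menu with the wrapper anchors removed (and the stray </il> fixed):

        @model Mscl.OpCost.Web.Models.stuffmodel
        <div class="menu">
            <ul>
                <li>@Html.ActionLink("Home", "Index", "Home")</li>
                <li>@Html.ActionLink("some stuff", "stuffs", "stuff")</li>
                <li>
                    <h5><span>somestuff</span></h5>
                    <ul>
                        <li>stuffs1s
                            <ul>
                            @foreach (var image in Model.stuffs.Where(g => g.Grouping == 1))
                            {
                                <li>@Html.ActionLink(image.Title, "stuffs", "stuff", new { Id = image.CategoryId }, null)</li>
                            }
                            </ul>
                        </li>
                    </ul>
                </li>
            </ul>
        </div>

    If IE happens to keep the nested anchors clickable, that would explain why the menu works there and not in Firefox; comparing the rendered source in both browsers should confirm it.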

    Read the article

< Previous Page | 281 282 283 284 285 286 287 288 289 290 291 292  | Next Page >