Search Results

Search found 32342 results on 1294 pages for 'access insurance'.


  • How can I retrieve "remembered"(stored) wi-fi password from a win. 7 device?

    - by user180880
    I have access to the PC, but only as a standard user. Everything (including the "show characters" checkbox in the wireless menu) requires admin access. Now, the machine in question is actually like a big TV with touch input; typing is handled by the Windows on-screen keyboard. I can reach C:\ProgramData\Microsoft\Wlansvc\Profiles\Interfaces and can see and open the files there, which I assume is where the passwords are stored. The problem is that these passwords are encrypted. I'm also OK with a way of changing or resetting the admin password. Considering this device has nothing but a massive number of USB ports (yep, not even a CD/DVD-RW drive), the only way in is from inside the OS or via USB, without admin rights.
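
    For reference, a minimal sketch of the built-in way to inspect stored profiles with netsh ("YourSSID" is a placeholder). Listing profiles works as a standard user, but revealing the key requires an elevated prompt, which is exactly what is missing here; the keyMaterial in those profile XML files is protected with DPAPI under the machine context, so a standard user cannot simply decrypt it from the files either.

        netsh wlan show profiles
        rem key=clear only includes the "Key Content" field when run as an administrator
        netsh wlan show profile name="YourSSID" key=clear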

    Read the article

  • Why is the word PERSONAL still relevant in the term PC? [closed]

    - by Bill
    I have spent half an hour trying to change an icon on my Win-7-64 machine (Why Can't I Change the Icon). One reasonable suggestion (reasonable in terms of having a solution, not reasonable in terms of having to jump through these hoops for such a basic requirement) was to delete the old icon from %userprofile%\Local Settings..., however when I click on this folder in Windows Explorer I am told the folder is not accessible - Access Denied. Well! It's my PERSONAL computer, isn't it? Isn't that what PC stands for? It's MY computer - why can't I get access to that folder? It's about time we started calling these machines MCs (Microsoft Computers) or WCs (Windows Computers) - because they sure as hell ain't PERSONAL damn computers!!!!

    Read the article

  • Why can I browse to localhost, not to my computer name? (IIS7) [closed]

    - by Lost Hobbit
    I'm not very clued up on IIS, but I'm trying to do something that I thought would be quite simple. In IE, if I browse to http://localhost:80, I am greeted with a pretty picture with a bunch of welcome messages and a big "IIS7", thanks to the graphic designers at Microsoft. In IE, if I browse to http://mycomputername:80, I'm greeted with a 404. It may be my fault... perhaps I've done something weird. Chrome replies 404 to both of those. Should this work, and if so, what am I doing wrong and how can I fix it? EDIT: To add a bit more information, I found after posting this question that http://localhost:80 was the only URL I could access on my local PC; I could not access any of the virtual sub-directories on localhost via my browser.
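
    As a first diagnostic step (a sketch, not a confirmed fix), it may be worth listing the site bindings to confirm the default site is bound to all host names on port 80 rather than to a specific host header such as localhost:

        %windir%\system32\inetsrv\appcmd.exe list sites
        rem a binding of http/*:80: (empty host name) answers for localhost and the machine name alike;
        rem a binding restricted to a particular host name would be one explanation for the 404 on http://mycomputername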

    Read the article

  • Why is this server redirecting to another page???

    - by Mike L.
    I am building a site for a client. For a reason unknown to me, www.domain.com forwards to www.domain.com/directory/home.html. If I type www.domain.com/index.php it works correctly. I have checked .htaccess and there was nothing there, so I set the index to index.php, which works fine in every directory other than the root directory. I have root access and have checked httpd.conf (did a search in vi for the document I was being redirected to) and anything else I could think of. Where should I look next? The server is a VPS running CentOS 5.5 with multiple domains, with cPanel WHM 11 for root access and cPanel X installed for each domain.
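
    As a hedged next step (the paths below are the usual cPanel/WHM locations and may differ on your VPS), search the account's docroot and the Apache includes for whatever issues the redirect - a stray index.html with a meta refresh, a Redirect/RewriteRule in an include file, or a redirect configured in cPanel itself:

        # ACCOUNTNAME is a placeholder for the cPanel account
        grep -ril "directory/home.html" /home/ACCOUNTNAME/public_html
        grep -rn "home.html" /usr/local/apache/conf/httpd.conf /usr/local/apache/conf/userdata 2>/dev/null
        # also check the DirectoryIndex order: an index.html sitting next to index.php is normally served first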

    Read the article

  • How to configure permissions for win2008 task running as Network Service to stop/start a service on different system?

    - by weiji
    Well... the title says it all, actually. We've got a scheduled task set up on a Windows Server 2003 box running as Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box; however, sc reports: [SC] OpenService FAILED 5: Access is denied. Running the same batch file from Windows Explorer has no issues, and my user account is part of the Administrators group, which I believe is why there are no issues when I try it manually. Is this a permission I enable for Network Service on the first server? Or do I enable permissions for Network Service somehow on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking to enable the Network Service account to access the remote service via the scheduled task.
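
    One hedged avenue to investigate: when the task runs as Network Service and reaches out to another machine, it authenticates there as the first server's computer account (DOMAIN\MACHINE$), not as your user, so the remote service's security descriptor has to grant start/stop rights to that account (or to a group containing it). The descriptor can be inspected from an elevated prompt ("TheService" is a placeholder):

        sc \\targetserver sdshow TheService
        rem in the returned SDDL, RP = start and WP = stop; granting those rights to the computer
        rem account (via sc sdset or subinacl) is the usual follow-up, but test any new SDDL carefully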

    Read the article

  • What are my options for a secure External File Share in Server 2008 R2?

    - by Nitax
    Hi, I have a Windows Server 2008 R2 machine installed on a home network with a number of files that need to be shared in a few different scenarios. I would like all three scenarios to have a solution with some sort of encryption to protect the data during transfer. Scenario 1: I need to access files from my laptop (Mac OS X) or another computer outside of the network. This option seems like the easy one to answer, in that I could use LogMeIn, the Windows VPN, etc. to create such a connection. Scenario 2: I need to provide access to another user with minimal installation/configuration on his or her end. This makes me think of the new FTP 7.5 provided with Server 2008 R2, but I'm not sure of the details: does it support SSH or some other form of encryption? Can an OS X user connect? etc. My question here is: what are my options? I really just don't know where to get started...

    Read the article

  • iptable rules not blocking

    - by psychok7
    So I am trying to allow SSH access for a certain range of IPs (from 192.168.1.1 to 192.168.1.24) and block all the rest, but since I am new to iptables I can't seem to figure it out. I have:
      iptables -A INPUT -s 192.168.1.0/24 -p udp --dport ssh -j ACCEPT
      iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport ssh -j ACCEPT
      iptables -A INPUT -p tcp --dport ssh -j REJECT
      iptables -A INPUT -p udp --dport ssh -j REJECT
    but this does not work: from a VM set to 192.168.1.89 I can still get in through SSH. Can someone help?
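
    SSH runs over TCP only, and 192.168.1.0/24 matches everything from .1 to .254, which is why the .89 VM still hits the ACCEPT rule. A sketch of one way to express the intended range, assuming the iprange match is available (rules are evaluated in order, so any earlier ACCEPT in the chain would still win):

        iptables -A INPUT -p tcp --dport 22 -m iprange --src-range 192.168.1.1-192.168.1.24 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j REJECT
        # verify the resulting order with:
        iptables -L INPUT -n --line-numbers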

    Read the article

  • Apache server, PHP

    - by user65649
    I am running a PHP site on my Apache server (Mac). I am having trouble displaying images on the site when I access it externally or from another computer on the same network. If I try to access an image directly (website.com/image.jpg), I get a broken-image icon and the image won't display. Any ideas what could cause this? My images are embedded using a style.css file: background-image:url(image.jpg);
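
    A quick hedged check from another machine (hostname and paths are placeholders): confirm what Apache actually returns for the image and that the file is readable by the Apache user (_www on Mac OS X). Also note that url(image.jpg) in a stylesheet is resolved relative to the location of the CSS file, not the page that includes it.

        curl -I http://your-hostname/image.jpg      # look at the status code and Content-Type
        ls -l /path/to/docroot/image.jpg            # needs read permission for the Apache (_www) user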

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user298814
    Hi, We've spent the last two days trying to get Squid 2.7 to work with Ubuntu 9.10. The computer running Ubuntu has two network interfaces, eth0 and eth1, with DHCP running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine:
      ERROR: The requested URL could not be retrieved
      Invalid Request error was encountered while trying to process the request:
      GET / HTTP/1.1
      Host: www.seriouswheels.com
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9
      Cache-Control: max-age=0
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
      Some possible problems are:
      Missing or unknown request method.
      Missing URL.
      Missing HTTP Identifier (HTTP/1.0).
      Request is too large.
      Content-Length missing for POST or PUT requests.
      Illegal character in hostname; underscores are not allowed.
      Your cache administrator is webmaster.
    Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that Squid is the way to do this, but at this point in the game we are just trying to get Squid set up correctly as our proxy. Thanks in advance.
    squid.conf:
      acl all src all
      acl manager proto cache_object
      acl localhost src 127.0.0.1/32
      acl to_localhost dst 127.0.0.0/8
      acl localnet src 192.168.0.0/24
      acl SSL_ports port 443 # https
      acl SSL_ports port 563 # snews
      acl SSL_ports port 873 # rsync
      acl Safe_ports port 80 # http
      acl Safe_ports port 21 # ftp
      acl Safe_ports port 443 # https
      acl Safe_ports port 70 # gopher
      acl Safe_ports port 210 # wais
      acl Safe_ports port 1025-65535 # unregistered ports
      acl Safe_ports port 280 # http-mgmt
      acl Safe_ports port 488 # gss-http
      acl Safe_ports port 591 # filemaker
      acl Safe_ports port 777 # multiling http
      acl Safe_ports port 631 # cups
      acl Safe_ports port 873 # rsync
      acl Safe_ports port 901 # SWAT
      acl purge method PURGE
      acl CONNECT method CONNECT
      http_access allow manager localhost
      http_access deny manager
      http_access allow purge localhost
      http_access deny purge
      http_access deny !Safe_ports
      http_access deny CONNECT !SSL_ports
      http_access allow localhost
      http_access allow localnet
      http_access deny all
      icp_access allow localnet
      icp_access deny all
      http_port 3128
      hierarchy_stoplist cgi-bin ?
      cache_dir ufs /var/spool/squid/cache1 1000 16 256
      access_log /var/log/squid/access.log squid
      refresh_pattern ^ftp: 1440 20% 10080
      refresh_pattern ^gopher: 1440 0% 1440
      refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
      refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
      refresh_pattern . 0 20% 4320
      acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
      upgrade_http0.9 deny shoutcast
      acl apache rep_header Server ^Apache
      broken_vary_encoding allow apache
      extension_methods REPORT MERGE MKACTIVITY CHECKOUT
      cache_mgr webmaster
      cache_effective_user proxy
      cache_effective_group proxy
      hosts_file /etc/hosts
      coredump_dir /var/spool/squid
    access.log:
      1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html
    00-firewall:
      iptables -F
      iptables -t nat -F
      iptables -t mangle -F
      iptables -X
      echo 1 | tee /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -j MASQUERADE
      iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128
    networking (/etc/network/interfaces):
      auto lo
      iface lo inet loopback
      auto eth0
      iface eth0 inet static
      address 142.104.109.179
      netmask 255.255.224.0
      gateway 142.104.127.254
      auto eth1
      iface eth1 inet static
      address 192.168.1.100
      netmask 255.255.255.0
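
    One thing worth checking (an assumption, not a confirmed diagnosis): the "Invalid Request" page for a bare "GET / HTTP/1.1" is the classic symptom of intercepted traffic arriving on a port that Squid does not know is transparent, so it receives a relative URL it cannot turn into a full request. In Squid 2.7 the option is spelled "transparent" on http_port (it was renamed "intercept" in Squid 3.1+):

        # /etc/squid/squid.conf
        http_port 3128 transparent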

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. With the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2, site3 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.
    So I've done the easy bit: I've managed to get SVN working with these "maintainers" so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the web-front directories for each of the sites.
    So, an example of how it's set up now: Joe and Bob both belong to a group called "Dev". I have the master /svn folders set up with read and write access for this group, and it works great. The post-commit hook triggers, updates the site, and then sets 777 on each file within the web front. I then changed this to try to factor in group permission updates instead of a straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:
      drwxrwxr-x .
      drwxrwxr-x ..
      drwxrwxr-x site1 site1 my_test_folder
      -rw-rw-r-- site1 site1 my_test_file
    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I add a file or folder to /home/site1, say bobs_file, it then looks like this:
      drwxrwxr-x .
      drwxrwxr-x ..
      drwxr-xr-x Bob dev bobs_folder
      drwxrwxr-x site1 site1 my_test_folder
      -rw-rw-r-- Bob dev bobs_file
      -rw-rw-r-- site1 site1 my_test_file
    How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file are changed to reflect "site1" "site1"? As Bob belongs to Dev I can set the permissions correctly with chmod, but chgrp is throwing back operation errors. Now, this was long-winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:
    - Two people to update multiple user accounts, given the structure of WHM.
    - Maintain the master user/group permissions of files and folders as the original user account, not the account of the updater.
    - I like the security of SVN+SSH over just SVN.
    - I don't want to run all this as root.
    I hope this made sense, and thanks in advance :)
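
    A hedged sketch of one common compromise, using the paths from the question: have root set the ownership once and mark the directories setgid so that new files inherit the site1 group, and have the maintainers (or the post-commit script) use a group-friendly umask. Note that setgid inheritance applies to the group only - the owner of Bob's new files will still be Bob, and changing the owner itself always requires root (for example by running the post-commit checkout as the site user via sudo).

        # run once as root
        chown -R site1:site1 /home/site1/public_html
        find /home/site1/public_html -type d -exec chmod 2775 {} \;
        # in Joe's and Bob's environment, or at the top of the post-commit script
        umask 002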

    Read the article

  • Apache mod_rewrite driving me mad

    - by WishCow
    The scenario
    I have a webhost that is shared among multiple sites; the directory layout looks like this:
      siteA/
        - css/
        - js/
        - index.php
      siteB/
        - css/
        - js/
        - index.php
      siteC/
        ...
    The DocumentRoot is at the top level, so to access siteA you type http://webhost/siteA in your browser, to access siteB you type http://webhost/siteB, and so on. Now I have to deploy my own site, which was designed with 3 VirtualHosts in mind, so my structure looks like this:
      siteD/
        - sites/sitename.com/
          - log/
          - htdocs/
            - index.php
        - sites/static.sitename.com
          - log/
          - htdocs/
            - css
            - js
        - sites/admin.sitename.com
          - log/
          - htdocs/
            - index.php
    As you can see, the problem is that my index.php files are not in the top-level directory, unlike the already existing sites on the webhost. Each VirtualHost should point to the corresponding htdocs/ folder:
      http://siteD.com -> siteD/sites/sitename.com/htdocs
      http://static.siteD.com -> siteD/sites/static.sitename.com/htdocs
      http://admin.siteD.com -> siteD/sites/admin.sitename.com/htdocs
    The problem
    I cannot have VirtualHosts on this host, so I have to emulate them somehow, possibly with mod_rewrite.
    The idea
    Have some predefined parts in all of the links on the site that I can identify and route to the correct file with mod_rewrite. Examples:
      http://webhost/siteD/static/js/something.js -> siteD/sites/static.sitename.com/htdocs/js/something.js
      http://webhost/siteD/static/css/something.css -> siteD/sites/static.sitename.com/htdocs/css/something.css
      http://webhost/siteD/admin/something -> siteD/sites/admin.sitename.com/htdocs/index.php
      http://webhost/siteD/admin/sub/something -> siteD/sites/admin.sitename.com/htdocs/index.php
      http://webhost/siteD/something -> siteD/sites/sitename.com/htdocs/index.php
      http://webhost/siteD/sub/something -> siteD/sites/sitename.com/htdocs/index.php
    Anything that starts with http://url/sitename/admin/(.*) will get rewritten to point to siteD/sites/admin.sitename.com/htdocs/index.php. Anything that starts with http://url/sitename/static/(.*) will get rewritten to point to siteD/sites/static.sitename.com/htdocs/$1. Anything that starts with http://url/sitename/(.*) AND did not already match above will get rewritten to point to siteD/sites/sitename.com/htdocs/index.php.
    The solution
    Here is the .htaccess file that I've come up with:
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} ^/siteD/static/(.*)$ [NC]
      RewriteRule ^siteD/static/(.*)$ siteD/sites/static/htdocs/$1 [L]
      RewriteCond %{REQUEST_URI} ^/siteD/admin/(.*)$ [NC]
      RewriteRule ^siteD/(.*)$ siteD/sites/admin/htdocs/index.php [L,QSA]
    So far, so good - it's all working. Now to add the last rule:
      RewriteCond %{REQUEST_URI} ^/siteD/(.*)$ [NC]
      RewriteRule ^siteD/(.*)$ siteD/sites/public/htdocs/index.php [L,QSA]
    And it's broken. The last rule catches everything, even the URLs that have static/ or admin/ in them. Why? Shouldn't the [L] flag stop the rewriting process in the first two cases? Why is the third case evaluated? Is there a better way of solving this? I'm not sticking to mod_rewrite; anything is fine as long as it does not need access to server-level config. I don't have access to RewriteLog or anything like that. Please help :(
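
    For what it's worth: in per-directory (.htaccess) context the [L] flag only ends the current pass; the rewritten URL is then fed back through the ruleset, where the catch-all matches it again. The usual fix is a guard condition on the catch-all (or the [END] flag on Apache 2.4+) - a sketch using the paths from the question:

        RewriteCond %{REQUEST_URI} !^/siteD/sites/ [NC]
        RewriteCond %{REQUEST_URI} ^/siteD/(.*)$ [NC]
        RewriteRule ^siteD/(.*)$ siteD/sites/public/htdocs/index.php [L,QSA]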

    Read the article

  • just can't get a controller to work

    - by Asaf
    I try to get into mysite/user so that application/classes/controller/user.php should be working, now this is my file tree: code of controller/user.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_User extends Controller_Default { public $template = 'user'; function action_index() { //$view = View::factory('user'); //$view->render(TRUE); $this->template->message = 'hello, world!'; } } ?> code of controller/default.php: <?php defined('SYSPATH') OR die('No direct access allowed.'); class Controller_default extends Controller_Template { } bootstrap.php: <?php defined('SYSPATH') or die('No direct script access.'); //-- Environment setup -------------------------------------------------------- /** * Set the default time zone. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/timezones */ date_default_timezone_set('America/Chicago'); /** * Set the default locale. * * @see http://kohanaframework.org/guide/using.configuration * @see http://php.net/setlocale */ setlocale(LC_ALL, 'en_US.utf-8'); /** * Enable the Kohana auto-loader. * * @see http://kohanaframework.org/guide/using.autoloading * @see http://php.net/spl_autoload_register */ spl_autoload_register(array('Kohana', 'auto_load')); /** * Enable the Kohana auto-loader for unserialization. * * @see http://php.net/spl_autoload_call * @see http://php.net/manual/var.configuration.php#unserialize-callback-func */ ini_set('unserialize_callback_func', 'spl_autoload_call'); //-- Configuration and initialization ----------------------------------------- /** * Initialize Kohana, setting the default options. * * The following options are available: * * - string base_url path, and optionally domain, of your application NULL * - string index_file name of your index file, usually "index.php" index.php * - string charset internal character set used for input and output utf-8 * - string cache_dir set the internal cache directory APPPATH/cache * - boolean errors enable or disable error handling TRUE * - boolean profile enable or disable internal profiling TRUE * - boolean caching enable or disable internal caching FALSE */ Kohana::init(array( 'base_url' => '/mysite/', 'index_file' => FALSE, )); /** * Attach the file write to logging. Multiple writers are supported. */ Kohana::$log->attach(new Kohana_Log_File(APPPATH.'logs')); /** * Attach a file reader to config. Multiple readers are supported. */ Kohana::$config->attach(new Kohana_Config_File); /** * Enable modules. Modules are referenced by a relative or absolute path. */ Kohana::modules(array( 'auth' => MODPATH.'auth', // Basic authentication 'cache' => MODPATH.'cache', // Caching with multiple backends 'codebench' => MODPATH.'codebench', // Benchmarking tool 'database' => MODPATH.'database', // Database access 'image' => MODPATH.'image', // Image manipulation 'orm' => MODPATH.'orm', // Object Relationship Mapping 'pagination' => MODPATH.'pagination', // Paging of results 'userguide' => MODPATH.'userguide', // User guide and API documentation )); /** * Set the routes. Each route must have a minimum of a name, a URI and a set of * defaults for the URI. */ Route::set('default', '(<controller>(/<action>(/<id>)))') ->defaults(array( 'controller' => 'welcome', 'action' => 'index', )); /** * Execute the main request. A source of the URI can be passed, eg: $_SERVER['PATH_INFO']. * If no source is specified, the URI will be automatically detected. 
*/ echo Request::instance() ->execute() ->send_headers() ->response; ?> .htaccess: RewriteEngine On RewriteBase /mysite/ RewriteRule ^(application|modules|system) - [F,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule .* index.php/$0 [PT,L] Trying to go to http://localhost/ makes the "hello world" page, from the welcome.php Trying to go to http://localhost/mysite/user give me this: The requested URL /mysite/user was not found on this server.
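
    Since the 404 is Apache's own error page rather than Kohana's, a hedged first check is whether the .htaccess rewrites are being applied at all - mod_rewrite enabled and AllowOverride permitting them for the directory containing /mysite (the path below is a placeholder):

        # e.g. on Debian/Ubuntu
        a2enmod rewrite
        # in the vhost or httpd.conf section covering the web root
        <Directory "/var/www">
            AllowOverride All
        </Directory>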

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience with web development, I have never been interested in obtaining a domain and publishing a website from my own server until now. Since today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process: web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which in turn forwards HTTP requests to my web server. I am confused about some parts and so far cannot get the website accessed from outside the network. All of this is just for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop with wireless access to the modem; the modem is a Zoom X6 5590.
    Here is what I have done so far and what I think should happen after each step. I can successfully access my website from any local computer by entering the host machine's internal IP address and port in a browser. Next, I forwarded port 80 to my host machine by creating a virtual server entry like 10.0.0.x (static internal IP of the host) - TCP - start port: 80 - end port: 80 in the router options. Now I assume every request that arrives at the public IP on port 80 will be forwarded to my host machine (10.0.0.x) on port 80. If everything went as desired, the website listening on port 80 would accept the request, process it and respond. I expected to be able to access my website from outside the network by entering http://MyPublicIp:80 in a browser, but I have not managed it so far, despite using GoDaddy's domain forwarding tool. I do see a small preview of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) my domain will be forwarded to is reachable. I am fairly sure configuring the domain does not play a role in this problem, since using the public IP and port directly does not help either. So here is the first question: what is causing this problem?
    After that, I have a couple of questions regarding domain forwarding with the GoDaddy tool. Can I forward my domain to any port, for example 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port on the host? What I want is: if a client enters www.mydomain.com, website1 responds on a specified port, and when a client enters info.mydomain.com, another website listening on a different port responds. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that? Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that forwards to that FTP site address? Is it possible to use sub-domains for FTP site access? I know I am more than confused, but however you reply, you will help me get a clearer view of this subject. Thank you very much in advance.
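
    On the sub-domain question, a hedged alternative to forwarding different ports: IIS can host several sites on the same public port 80 by using host-header bindings, so www.mydomain.com and info.mydomain.com each reach their own site without exposing extra ports. A sketch with appcmd (the site names are placeholders):

        appcmd set site /site.name:"MainSite" /+bindings.[protocol='http',bindingInformation='*:80:www.mydomain.com']
        appcmd set site /site.name:"InfoSite" /+bindings.[protocol='http',bindingInformation='*:80:info.mydomain.com']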

    Read the article

  • How do you setup an gsp snippet in grails and with spring-security-core?

    - by egervari
    Hi, I have a block of gsp I'd like to reuse and make into a little gsp snippet: <sec:ifLoggedIn> <g:link controller="user" action="showProfile">My Profile</g:link> | <g:link controller="privateMessage" action="list">Inbox</g:link> | <g:link controller="user" action="showPreferences">Preferences</g:link> | <g:link controller="logout" action="index">Logout</g:link> </sec:ifLoggedIn> <sec:ifNotLoggedIn> <form id="loginForm" action="/myproject/j_spring_security_check" method="POST"> <fieldset> <input type='text' name='j_username' id='username' size="15" /> <input type='password' name='j_password' id='password' size="15" /> <input type="submit" value="Login" class="button" /> <a href="#">Register</a> </fieldset> </form> </sec:ifNotLoggedIn> I have learned that I can use g:render template="_loginStuff" to merge the template in with the rest of the markup. However, doing so with Spring Security results in an error: java.lang.NullPointerException at org.codehaus.groovy.grails.plugins.springsecurity.AnnotationFilterInvocationDefinition.determineUrl(AnnotationFilterInvocationDefinition.java:77) at org.codehaus.groovy.grails.plugins.springsecurity.AbstractFilterInvocationDefinition.getAttributes(AbstractFilterInvocationDefinition.java:76) at org.springframework.security.access.intercept.AbstractSecurityInterceptor.beforeInvocation(AbstractSecurityInterceptor.java:171) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:106) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:97) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:78) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:112) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:188) at org.codehaus.groovy.grails.plugins.springsecurity.RequestHolderAuthenticationFilter.doFilter(RequestHolderAuthenticationFilter.java:40) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.codehaus.groovy.grails.plugins.springsecurity.MutableLogoutFilter.doFilter(MutableLogoutFilter.java:79) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:79) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:355) at 
org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:149) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.servlet.mvc.GrailsWebRequestFilter.doFilterInternal(GrailsWebRequestFilter.java:67) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.filters.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:66) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:619) I have no idea if I am just not using correctly, or if my template needs to be in a special folder... or if Spring-security-core will not allow to be used at all. Help?

    Read the article

  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish default policy. Worth noting at the start: I've not changed the stock tomcat install at all -- no new jars into its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action. the web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property). turning off the security manager entirely results in no errors whatsoever, and proper app functionality What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs): grant codeBase "file:${catalina.base}/webapps/ROOT.war" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" { permission java.security.AllPermission; }; grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" { permission java.security.AllPermission; }; Note the thrashing around attempting to find the right codeBase declaration. I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace: java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read) java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) java.security.AccessController.checkPermission(AccessController.java:546) java.lang.SecurityManager.checkPermission(SecurityManager.java:532) java.lang.SecurityManager.checkRead(SecurityManager.java:871) java.io.File.exists(File.java:731) org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785) org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206) org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299) org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937) org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973) org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108) java.lang.ClassLoader.getResource(ClassLoader.java:973) I'm pretty convinced that the actual file that's triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that: it doesn't exist in this context the fact that the file doesn't exist ends up throwing a security exception, rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission). 
Another workaround (besides just disabling the security manager in tomcat) is to add an open-ended permission to my policy file: grant { permission java.security.AllPermission; }; I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.
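
    If the goal is a grant narrower than AllPermission, a hedged sketch of just the file-read piece would look like the following; whether it resolves the remaining denial depends on which protection domain is actually failing the check, since the stack shows Tomcat's own WebappClassLoader calling File.exists():

        grant codeBase "file:${catalina.base}/webapps/ROOT/-" {
            permission java.io.FilePermission "${catalina.base}/webapps/ROOT/WEB-INF/-", "read";
        };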

    Read the article

  • CrossPage PostBack in series of web pages

    - by Vishnu Reddy
    I had a requirement to pass data between pages in my application. I have a Page A where user will input some data and on submit User will be redirected to Page B where again user will enter some more data and on submitting user will be show a confirmation in Page C after doing some calculations and data saving. Following is the idea I was trying to use: PageA.aspx: <form id="frmPageA" runat="server"> <p>Name: <asp:TextBox ID="txtName" runat="server"></asp:TextBox></p> <p>Age: <asp:TextBox ID="txtAge" runat="server"></asp:TextBox></p> <p><asp:Button ID="btnPostToPageB" runat="server" Text="Post To PageB" PostBackUrl="~/PageB.aspx" /></p> </form> In Page A Codebehind file I am creating following public properties of the inputs to access in Page B: public string Name { get { return txtName.Text.ToString(); } } public int Age { get { return Convert.ToInt32(txtAge.Text); } } In PageB.aspx: using previouspage directive to access page A public properties <%@ PreviousPageType VirtualPath="~/PageA.aspx" % <form id="frmPageB" runat="server"> <asp:HiddenField ID="hfName" runat="server" /> <asp:HiddenField ID="hfAge" runat="server" /> <p><asp:RadioButtonList ID="rblGender" runat="server" TextAlign="Right" RepeatDirection="Horizontal"> <asp:ListItem Value="Female">Female</asp:ListItem> <asp:ListItem Value="Male">Male</asp:ListItem> </asp:RadioButtonList></p> <p><asp:Button ID="btnPostToPageC" runat="server" Text="Post To PageC" PostBackUrl="~/PageC.aspx" /></p> </form> In Page B Codebehind file I am creating following public properties for the inputs to access in Page C: public RadioButtonList Gender { get { return rblGender; } } public string Name { get { return _name; } } public int Age { get { return _age; } } //checking if data is posted from Page A otherwise redirecting User to Page A protected void Page_Load(object sender, EventArgs e) { if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { hfName.Value = PreviousPage.Name; hfAge.Value = PreviousPage.Age.ToString(); } else Response.Redirect("PageA.aspx"); }` in PageC.aspx: using previouspage directive to access page A public properties <%@ PreviousPageType VirtualPath="~/PageB.aspx" % <form id="frmPageC" runat="server"> <p>Name: <asp:Label ID="lblName" runat="server"></asp:Label></p> <p>Age: <asp:Label ID="lblAge" runat="server"></asp:Label></p> <p>Gender: <asp:Label ID="lblGender" runat="server"></asp:Label></p> <p><asp:Button ID="btnBack" runat="server" Text="Back" PostBackUrl="~/PageA.aspx" /></p> </form> Page C code behind file: if (PreviousPage != null && PreviousPage.IsCrossPagePostBack && PreviousPage.IsValid) { lblName.Text = PreviousPage.Name.ToString(); lblAge.Text = PreviousPage.Age.ToString(); lblGender.Text = PreviousPage.Gender.SelectedValue.ToString(); } else Response.Redirect("PageA.aspx");

    Read the article

  • How does versioning work when using Boost Serialization for Derived Classes?

    - by Venkata Adusumilli
    When a Client serializes the following data: InternationalStudent student; student.id("Client ID"); student.firstName("Client First Name"); student.country("Client Country"); the Server receives the following: ID = "Client ID" Country = "Client First Name" instead of the following: ID = "Client ID" Country = "Client Country" The only difference between the Server and Client classes is the First Name of the Student. How can we make the Server ignore First Name recieved from the Client and process the Country? Server Side Classes class Student { public: Student(){} virtual ~Student(){} public: std::string id() { return idM; } void id(std::string id) { idM = id; } protected: friend class boost::serialization::access; protected: std::string idM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(idM); } }; class InternationalStudent : public Student { public: InternationalStudent() {} ~InternationalStudent() {} public: std::string country() { return countryM; } void country(std::string country) { countryM = country; } protected: friend class boost::serialization::access; protected: std::string countryM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(boost::serialization::base_object<Student>(*this)); archive & BOOST_SERIALIZATION_NVP(countryM); } }; Client Side Classes class Student { public: Student(){} virtual ~Student(){} public: std::string id() { return idM; } void id(std::string id) { idM = id; } std::string firstName() { return firstNameM; } void firstName(std::string name) { firstNameM = name; } protected: friend class boost::serialization::access; protected: std::string idM; std::string firstNameM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(idM); if (version >=1) { archive & BOOST_SERIALIZATION_NVP(firstNameM); } } }; BOOST_CLASS_VERSION(Student, 1) class InternationalStudent : public Student { public: InternationalStudent() {} ~InternationalStudent() {} public: std::string country() { return countryM; } void country(std::string country) { countryM = country; } protected: friend class boost::serialization::access; protected: std::string countryM; protected: template<class A> void serialize(A& archive, const unsigned int /*version*/) { archive & BOOST_SERIALIZATION_NVP(boost::serialization::base_object<Student>(*this)); archive & BOOST_SERIALIZATION_NVP(countryM); } };

    Read the article

  • drupal_get_form is not passing along node array

    - by ElectronicBlacksmith
    I have not been able to get drupal_get_form to pass on the node data. Code snippets are below. The drupal_get_form documentation (api.drupal.org) states that it will pass on the extra parameters. I conclude that the node data is not being passed because (apparently) $node['language'] is not defined in hook_form, which causes $form['qqq'] not to be created and thus the preview button shows up. My goal is that the preview button shows up for the path "node/add/author" but not for "milan/author/add". Any alternative method for achieving this goal would be helpful, but the question I want answered is in the preceding paragraph. Everything I've read indicates that it should work.
    This menu item:
      $items['milan/author/add'] = array(
        'title' => 'Add Author',
        'page callback' => 'get_author_form',
        'access arguments' => array('access content'),
        'file' => 'author.pages.inc',
      );
    calls this code:
      function get_author_form() {
        //return node_form(NULL,NULL);
        //return drupal_get_form('author_form');
        return author_ajax_form('author');
      }
      function author_ajax_form($type) {
        global $user;
        module_load_include('inc', 'node', 'node.pages');
        $types = node_get_types();
        $type = isset($type) ? str_replace('-', '_', $type) : NULL;
        // If a node type has been specified, validate its existence.
        if (isset($types[$type]) && node_access('create', $type)) {
          // Initialize settings:
          $node = array('uid' => $user->uid, 'name' => (isset($user->name) ? $user->name : ''), 'type' => $type, 'language' => 'bbb', 'bbb' => 'TRUE');
          $output = drupal_get_form($type .'_node_form', $node);
        }
        return $output;
      }
    And here is the hook_form and hook_form_alter code:
      function author_form_author_node_form_alter(&$form, &$form_state) {
        $form['author'] = NULL;
        $form['taxonomy'] = NULL;
        $form['options'] = NULL;
        $form['menu'] = NULL;
        $form['comment_settings'] = NULL;
        $form['files'] = NULL;
        $form['revision_information'] = NULL;
        $form['attachments'] = NULL;
        if ($form["qqq"]) {
          $form['buttons']['preview'] = NULL;
        }
      }
      function author_form(&$node) {
        return make_author_form(&$node);
      }
      function make_author_form(&$node) {
        global $user;
        $type = node_get_types('type', $node);
        $node = author_make_title($node);
        drupal_set_breadcrumb(array(l(t('Home'), NULL), l(t($node->title), 'node/' . $node->nid)));
        $form['authorset'] = array('#type' => 'fieldset', '#title' => t('Author'), '#weight' => -50, '#collapsible' => FALSE, '#collapsed' => FALSE);
        $form['author_id'] = array('#access' => user_access('create pd_recluse entries'), '#type' => 'hidden', '#default_value' => $node->author_id, '#weight' => -20);
        $form['authorset']['last_name'] = array('#type' => 'textfield', '#title' => t('Last Name'), '#maxlength' => 60, '#default_value' => $node->last_name);
        $form['authorset']['first_name'] = array('#type' => 'textfield', '#title' => t('First Name'), '#maxlength' => 60, '#default_value' => $node->first_name);
        $form['authorset']['middle_name'] = array('#type' => 'textfield', '#title' => t('Middle Name'), '#maxlength' => 60, '#default_value' => $node->middle_name);
        $form['authorset']['suffix_name'] = array('#type' => 'textfield', '#title' => t('Suffix Name'), '#maxlength' => 14, '#default_value' => $node->suffix_name);
        $form['authorset']['body_filter']['body'] = array('#access' => user_access('create pd_recluse entries'), '#type' => 'textarea', '#title' => 'Describe Author', '#default_value' => $node->body, '#required' => FALSE, '#weight' => -19);
        $form['status'] = array('#type' => 'hidden', '#default_value' => '1');
        $form['promote'] = array('#type' => 'hidden', '#default_value' => '1');
        $form['name'] = array('#type' => 'hidden', '#default_value' => $user->name);
        $form['format'] = array('#type' => 'hidden', '#default_value' => '1');
        // NOTE in node_example there is some additional code here not needed for this simple node type
        $thepath = 'milan/author';
        if ($_REQUEST["theletter"]) {
          $thepath .= "/" . $_REQUEST["theletter"];
        }
        if ($node['language']) {
          $thepath = 'milan/authorajaxclose';
          $form['qqq'] = array('#type' => 'hidden', '#default_value' => '1');
        }
        $form['#redirect'] = $thepath;
        return $form;
      }
    That menu path coincides with this theme (PHPTemplate).

    Read the article

  • Is this a right way to use NHibernate?

    - by Venemo
    I spent the rest of the evening reading StackOverflow questions and also some blog entries and links about the subject. All of them turned out to be very helpful, but I still feel that they don't really answer my question. So, I'm developing a simple web application. I'd like to create a reusable data access layer which I can later reuse in other solutions. 99% of these will be web applications. This seems to be a good excuse for me to learn NHibernate and some of the patterns around it. My goals are the following: I don't want the business logic layer to know ANYTHING about the inner workings of the database, nor NHibernate itself. I want the business logic layer to have the least possible number of assumptions about the data access layer. I want the data access layer as simplistic and easy-to-use as possible. This is going to be a simple project, so I don't want to overcomplicate anything. I want the data access layer to be as non-intrusive as possible. Will all this in mind, I decided to use the popular repository pattern. I read about this subject on this site and on various dev blogs, and I heard some stuff about the unit of work pattern. I also looked around and checked out various implementations. (Including FubuMVC contrib, and SharpArchitecture, and stuff on some blogs.) I found out that most of these operate with the same principle: They create a "unit of work" which is instantiated when a repository is instantiated, they start a transaction, do stuff, and commit, and then start all over again. So, only one ISession per Repository and that's it. Then the client code needs to instantiate a repository, do stuff with it, and then dispose. This usage pattern doesn't meet my need of being as simplistic as possible, so I began thinking about something else. I found out that NHibernate already has something which makes custom "unit of work" implementations unnecessary, and that is the CurrentSessionContext class. If I configure the session context correctly, and do the clean up when necessary, I'm good to go. So, I came up with this: I have a static class called NHibernateHelper. Firstly, it has a static property called CurrentSessionFactory, which upon first call, instantiates a session factory and stores it in a static field. (One ISessionFactory per one AppDomain is good enough.) Then, more importantly, it has a CurrentSession static property, which checks if there is an ISession bound to the current session context, and if not, creates one, and binds it, and it returns with the ISession bound to the current session context. Because it will be used mostly with WebSessionContext (so, one ISession per HttpRequest, although for the unit tests, I configured ThreadStaticSessionContext), it should work seamlessly. And after creating and binding an ISession, it hooks an event handler to the HttpContext.Current.ApplicationInstance.EndRequest event, which takes care of cleaning up the ISession after the request ends. (Of course, it only does this if it is really running in a web environment.) So, with all this set up, the NHibernateHelper will always be able to return a valid ISession, so there is no need to instantiate a Repository instance for the "unit of work" to operate properly. Instead, the Repository is a static class which operates with the ISession from the NHibernateHelper.CurrentSession property, and exposes some functionality through that. I'm curious, what do you think about this? Is it a valid way of thinking, or am I completely off track here?

    Read the article

  • Symfony 1.4/ Doctrine; n-m relation data cannot be accessed in template (indexSuccess)

    - by chandimak
    I have a database with 3 tables. It's a simple n-m relationship. Student, Course and StudentHasCourse to handle n-m relationship. I post the schema.yml for reference, but it would not be really necessary. Course: connection: doctrine tableName: course columns: id: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false name: type: string(45) fixed: false unsigned: false primary: false notnull: false autoincrement: false relations: StudentHasCourse: local: id foreign: course_id type: many Student: connection: doctrine tableName: student columns: id: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false registration_details: type: string(45) fixed: false unsigned: false primary: false notnull: false autoincrement: false name: type: string(30) fixed: false unsigned: false primary: false notnull: false autoincrement: false relations: StudentHasCourse: local: id foreign: student_id type: many StudentHasCourse: connection: doctrine tableName: student_has_course columns: student_id: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false course_id: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false result: type: string(1) fixed: true unsigned: false primary: false notnull: false autoincrement: false relations: Course: local: course_id foreign: id type: one Student: local: student_id foreign: id type: one Then, I get data from tables in executeIndex() from the following query. $q_info = Doctrine_Query::create() ->select('s.*, shc.*, c.*') ->from('Student s') ->leftJoin('s.StudentHasCourse shc') ->leftJoin('shc.Course c') ->where('c.id = 1'); $this->infos = $q_info->execute(); Then I access data by looping through in indexSuccess.php. But, in indexSuccess I can only access data from the table Student. <?php foreach ($infos as $info): ?> <?php echo $info->getId(); ?> <?php echo $info->getName(); ?> <?php endforeach; ?> I expected, that I could access StudentHasCourse data and Course data like the following. But, it generates an error. <?php echo $info->getStudentHasCourse()->getResult()?> <?php echo $info->getStudentHasCourse()->getCourse()->getName()?> The first statement gives a warning; Warning: call_user_func_array() expects parameter 1 to be a valid callback, class 'Doctrine_Collection' does not have a method 'getCourse' in D:\wamp\bin\php\php5.3.5\PEAR\pear\symfony\escaper\sfOutputEscaperObjectDecorator.class.php on line 64 And the second statement gives the above warning and the following error; Fatal error: Call to a member function getName() on a non-object in D:\wamp\www\sam\test_doc_1\apps\frontend\modules\registration\templates\indexSuccess.php on line 5 When I check the query from the Debug toolbar it appears as following and it gives all data I want. SELECT s.id AS s__id, s.registration_details AS s__registration_details, s.name AS s__name, s2.student_id AS s2__student_id, s2.course_id AS s2__course_id, s2.result AS s2__result, c.id AS c__id, c.name AS c__name FROM student s LEFT JOIN student_has_course s2 ON s.id = s2.student_id LEFT JOIN course c ON s2.course_id = c.id WHERE (c.id = 1) Though the question is short, as all the information mentioned it became so long. It's highly appreciated if someone can help me out to solve this. What I require is to access the data from StudentHasCourse and Course. If those data cannot be accessed by this design and this query, any other methodology is also appreciated.

    Read the article

  • Make c# matrix code faster

    - by Wam
    Hi all, Working on some matrix code, I'm concerned of performance issues. here's how it works : I've a IMatrix abstract class (with all matrices operations etc), implemented by a ColumnMatrix class. abstract class IMatrix { public int Rows {get;set;} public int Columns {get;set;} public abstract float At(int row, int column); } class ColumnMatrix : IMatrix { private data[]; public override float At(int row, int column) { return data[row + columns * this.Rows]; } } This class is used a lot across my application, but I'm concerned with performance issues. Testing only read for a 2000000x15 matrix against a jagged array of the same size, I get 1359ms for array access agains 9234ms for matrix access : public void TestAccess() { int iterations = 10; int rows = 2000000; int columns = 15; ColumnMatrix matrix = new ColumnMatrix(rows, columns); for (int i = 0; i < rows; i++) for (int j = 0; j < columns; j++) matrix[i, j] = i + j; float[][] equivalentArray = matrix.ToRowsArray(); TimeSpan totalMatrix = new TimeSpan(0); TimeSpan totalArray = new TimeSpan(0); float total = 0f; for (int iteration = 0; iteration < iterations; iteration++) { total = 0f; DateTime start = DateTime.Now; for (int i = 0; i < rows; i++) for (int j = 0; j < columns; j++) total = matrix.At(i, j); totalMatrix += (DateTime.Now - start); total += 1f; //Ensure total is read at least once. total = total > 0 ? 0f : 0f; start = DateTime.Now; for (int i = 0; i < rows; i++) for (int j = 0; j < columns; j++) total = equivalentArray[i][j]; totalArray += (DateTime.Now - start); } if (total < 0f) logger.Info("Nothing here, just make sure we read total at least once."); logger.InfoFormat("Average time for a {0}x{1} access, matrix : {2}ms", rows, columns, totalMatrix.TotalMilliseconds); logger.InfoFormat("Average time for a {0}x{1} access, array : {2}ms", rows, columns, totalArray.TotalMilliseconds); Assert.IsTrue(true); } So my question : how can I make this thing faster ? Is there any way I can make my ColumnMatrix.At faster ? Cheers !

    Read the article

  • Employee Info Starter Kit - Visual Studio 2010 and .NET 4.0 Version (4.0.0) Available

    - by joycsharp
    Employee Info Starter Kit is an ASP.NET-based web application, which includes very simple user requirements, where we can create, read, update and delete (CRUD) the employee info of a company. Based on just a database table, it explores and solves all major problems in the web development architectural space. This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and Sql Server to make robust, scalable, secure and maintainable web applications quickly and easily. Since its first release, this starter kit has achieved huge popularity in the web developer community and has 140,000+ downloads from the project web site. Visual Studio 2010 and .NET 4.0 came with lots of exciting features to make software developers' lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available in both MSDN Code Gallery and CodePlex. Check out the latest version of this starter kit to enjoy the cool features available in Visual Studio 2010 and .NET 4.0. [ Release Notes ]
    Architectural Overview: Simple 2-layer architecture (user interface and data access layer) with 1 optional cache layer; ASP.NET Web Form based user interface; Custom Entity Data Container implemented (with primitive C# types for data fields); Active Record Design Pattern based Data Access Layer, implemented in C# and Entity Framework 4.0; Sql Server Stored Procedures to perform the actual CRUD operations; Standard infrastructure (architecture, helper utility) for automated integration (bottom-up manner) and unit testing.
    Technology Utilized
    Programming Languages/Scripts: Browser side: JavaScript; Web server side: C# 4.0; Database server side: T-SQL.
    .NET Framework Components: .NET 4.0 Entity Framework, .NET 4.0 Optional/Named Parameters, .NET 4.0 Tuple, .NET 3.0+ Extension Method, .NET 3.0+ Lambda Expressions, .NET 3.0+ Anonymous Type, .NET 3.0+ Query Expressions, .NET 3.0+ Automatically Implemented Properties, .NET 3.0+ LINQ, .NET 2.0+ Partial Classes, .NET 2.0+ Generic Type, .NET 2.0+ Nullable Type.
    ASP.NET: ASP.NET 3.5+ List View (TBD), ASP.NET 3.5+ Data Pager (TBD), ASP.NET 2.0+ Grid View, ASP.NET 2.0+ Form View, ASP.NET 2.0+ Skin, ASP.NET 2.0+ Theme, ASP.NET 2.0+ Master Page, ASP.NET 2.0+ Object Data Source, ASP.NET 1.0+ Role Based Security.
    Visual Studio Features: Visual Studio 2010 CodedUI Test, Visual Studio 2010 Layer Diagram, Visual Studio 2010 Sequence Diagram, Visual Studio 2010 Directed Graph, Visual Studio 2005+ Database Unit Test, Visual Studio 2005+ Unit Test, Visual Studio 2005+ Web Test, Visual Studio 2005+ Load Test.
    Sql Server Features: Sql Server 2005 Stored Procedure, Sql Server 2005 Xml type, Sql Server 2005 Paging support.

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy the virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image of my own (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the output in the console. To execute multiple commands in parallel I use the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique reduces execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only part of their execution time is subject to this lock, so you still get a better response time in these scenarios (provisioning a new VM is one such case). Finally, when you remove the VMs you are still left with the disks that contained them. They cannot be deleted immediately after the VM removal, because you have to wait until the removal operation has completed on Azure, so I wrote a script to run a few minutes after removing the VMs that deletes the disks (and VHDs) no longer related to any VM. It only touches disks associated with the original image name used to provision the VMs, so it doesn't remove disks deployed by other batches that I might want to preserve. These examples are specific to my scenario; if you need a more complex configuration you will have to adapt the code. But if all you need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts (reproduced below):

    - ProvisionVMs: provisions many VMs in parallel starting from the same image; it creates one service for each VM.
    - RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    - StartVMs: starts all the VMs in parallel.
    - StopVMs: stops all the VMs in parallel.
    - RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
            $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
            $Password = "Password"      # Write the password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage account in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Displays batch completed
    echo "Provisioning VM Completed"

    RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Remove VM Completed"

    StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Start VM Completed"

    StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Stop VM Completed"

    RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose

    Read the article
