Search Results

Search found 10657 results on 427 pages for 'group'.


  • Dovecot starting and running, but not listening on any port

    - by Dženis Macanovic
    Among other things I'm in charge of a Debian GNU/Linux (Wheezy) DomU for the mail services of the company I work for. Yesterday one HDD that was used for this particular server died. After installing Debian again, Dovecot decided to no longer listen on any ports (checked with netstat -l). Other services (like Postfix and MySQL) work without problems. dovecot -n: # 2.1.7: /etc/dovecot/dovecot.conf # OS: Linux 3.2.0-3-amd64 x86_64 Debian wheezy/sid ext3 auth_mechanisms = plain login disable_plaintext_auth = no first_valid_uid = 150 last_valid_uid = 150 mail_gid = mail mail_location = maildir:/var/vmail/%d/%n mail_uid = vmail namespace inbox { inbox = yes location = prefix = } passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } plugin { sieve = ~/.dovecot.sieve sieve_dir = ~/sieve } service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-userdb { group = mail mode = 0666 user = vmail } } service imap-login { inet_listener imaps { port = 993 ssl = yes } } service pop3-login { inet_listener pop3s { port = 995 ssl = yes } } ssl_cert = </etc/ssl/private/mail.crt ssl_key = </etc/ssl/private/mail.key userdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql } protocol imap { mail_max_userip_connections = 25 } UID 150 is vmail (I double-checked file permissions). I didn't install Dovecot from source, but via apt from the official Debian US mirror. There are no messages concerning Dovecot in /var/log/syslog except for: Oct 21 06:36:29 server dovecot: master: Dovecot v2.1.7 starting up (core dumps disabled) Any ideas?
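    For illustration, one thing worth checking, since the doveconf output above contains no protocols line at all: on Debian the IMAP and POP3 listeners only start when the corresponding protocols are enabled, which normally happens via the dovecot-imapd and dovecot-pop3d packages. A sketch of what to verify after the reinstall (the package names and the setting below are the standard Debian ones, not taken from this server):

      doveconf protocols                            # see which protocols Dovecot will serve
      apt-get install dovecot-imapd dovecot-pop3d   # reinstall the protocol packages if the list is empty
      # or set it explicitly in /etc/dovecot/dovecot.conf and restart:
      protocols = imap pop3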

    Read the article

  • Per-mailbox IMAP settings in Exchange 2003 apply successfully but revert to server default

    - by erictheavg
    The title says most of it. I have a Spiceworks mailbox that connects to our Exchange Server 2003 box via IMAP for receiving help desk issues. But for complicated reasons, I want it to receive those emails in text-only format. So, I discovered that you can just go to: Exchange System Manager Administrative Groups First Administrative Group First Storage Group Mailbox Store Mailboxes Right-click the mailbox, Configure Exchange Features Edit the properties for IMAP Set that mailbox to only receive message bodies as plain text. I click OK, then Next, it reports success, and I assume I'm done. But then when I go right back to where I was, I see that "Use protocol defaults" is still checked. Anyone have a clue why this would be? Some other details: I'm logged in as Administrator when I do this. I can't change this setting for the entire IMAP virtual server because some regular users use it. I only have one IP address to play with, which means I can't create another IMAP virtual server. Any suggestions or ideas are greatly appreciated!

    Read the article

  • Public Folders - Delete Public Folders from 2003 after migrating to 2010 (via Adsiedit) - safe?

    - by HeavenCore
    Similar Question: How do I delete a public store in Exchange 2003? We are ready to remove our Exchange 2003 server after having migrated all public folders and mailboxes to 2010. We ran for a week with the Exchange 2003 server shut down and everything seemed to work. When I try to delete the PF database from 2003 it says it contains replicas. Whilst migrating I only had one-way sync working (from 2003 to 2010), so I believe that 2003 hasn't received the responses from 2010 saying a replica was removed. When I look in Public Folders on the 2003 box none are listed; when I look in PF Instances they are all listed. I know everything has moved to the 2010 server and I know 2010 is not showing the 2003 server as a replica for any folders. I am looking to use ADSI Edit to remove the Public Folder database from the 2003 server, but want to ensure I am going to delete the right thing so that the folders do not get deleted from the 2010 database. Should I delete Configuration, Services, Microsoft Exchange, Company Name, Administrative Groups, First Administrative Group, Servers, Server Name, Information Store, First Storage Group, Public Folder Store (Server Name)? Or something else? I have checked and the only public folder with the old Exchange server listed as a replica is SYSTEM CONFIGURATION. Thanks in advance.

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has left us stumped. The problem is that we have our namespace presented as: \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business] We had a problem this morning that meant that a number of users started mapping their AD Home Drive directly to the \\domain.co.uk\public directory, and we found that they had read/write. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder would be held. All the folders that are presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups that are applied via the 'security' tab. I'd like to do the same on \public but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
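    For what it is worth, on a domain-based namespace the ACLs for the root itself are normally just NTFS permissions on the root target folder of each namespace server. A sketch of locking it down, assuming the root target lives at C:\DFSRoots\Public (the path and group names are assumptions, not taken from this environment, and would need to be applied on every server hosting the root):

      icacls C:\DFSRoots\Public /grant:r "DOMAIN\Domain Users:RX"
      icacls C:\DFSRoots\Public /grant "DOMAIN\Domain Admins:(OI)(CI)F"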

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777 -- but I am not comfortable with it being this way. So, on the advice of a serverfault specialist, I had my hosting provider install ACL on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so) because all my dirs and files are owned by "abc", the group is "abc", and the 1st octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to user "nobody", so when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet was a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACL to avoid setting the permission to 6 (rw-) or even 7 (rwx)? [not sure what execute does or means] Just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). OK, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
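    As one possible pattern, offered only as a sketch and assuming PHP really does run as "nobody" and uploads land in a dedicated directory (the ~/public_html and uploads paths below are assumptions): keep everything owned by abc with nothing for "other", give the web-server user read/traverse via an ACL, and grant write only where uploads go.

      chmod -R o-rwx ~/public_html                         # remove all access for "other"
      setfacl -R -m u:nobody:rX ~/public_html              # web server may read and traverse
      setfacl -R -m u:nobody:rwX ~/public_html/uploads     # ...and write only here
      setfacl -R -d -m u:nobody:rwX ~/public_html/uploads  # new files inherit the same ACL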

    Read the article

  • Last step in HDD Recovery (fixing Windows)

    - by Atom Computing
    My dad’s hard drive became corrupted as a result of many bad sectors. Anyway, I made a clone of the drive and have now repaired it totally (recreating the MBR and MFT) and run a series of ChkDsks on it. I can now see all the files and folders on it and it is all intact. I currently have it as a slave in my computer (where I was doing all the repairs). When putting it back into its own computer, it comes up with "A disk read error occurred: Press Ctrl + Alt + Del to Restart". I don’t know why this is happening but think it might have something to do with file permissions. I have tried a start-up recovery on the Vista boot CD and it found no problems. When trying to apply file permissions (and creating file perms for the SYSTEM group (as it didn’t have any for the SYSTEM group)) it couldn't apply them for some of the System32 folder files. I have tried applying them as admin and with the most powerful privileges I can get, all to no avail. When it is in my PC I can boot it up (I added it into my bootloader) and it boots up fine, except that when it logs in it comes up with the error - "Rundll32.exe - Windows cannot access the specified device, path or file. You may not have the appropriate permissions to access the item". This message keeps coming back and nothing loads at all. Any help would be gratefully received, as I have got so far with the data recovery and want to avoid a reformat at all costs due to the vast number of programs installed, and I don’t have much time on my hands! Thanks
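    In case it helps to go one step past the automated Startup Repair, the Vista recovery environment exposes the same repairs as manual commands; a sketch of what is commonly tried from its command prompt while the repaired disk is the boot drive (drive letter assumed):

      bootrec /fixmbr
      bootrec /fixboot
      bootrec /rebuildbcd
      chkdsk C: /f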

    Read the article

  • NGINX AEGIR DRUPAL permissions 403 forbidden

    - by nlam
    New to nginx. Installed on Mac OS for use with Aegir & Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here: /var/aegir/hostmaster-6.x-1.7/ The hostmaster settings file is here: /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 forbidden page because of this. If I give read permission to "other" the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files & sites/aegir.ldev/private). I would have to change the permissions there too. Moreover, I would also have to change permissions for every site installed by hostmaster. Anyway. In my nginx.conf I have the following: user "myuser" _www; Owner and group for settings.php, /sites/example.ldev/files, /sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves this problem, but really confuses me. Why does "other" have permission and not owner or group? I've checked the processes running in Activity Monitor. Nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.

    Read the article

  • Restrict access to one SVN repository (override default)

    - by teel
    I'm trying to set up our SVN server so that by default the group developers will have access to all repositories, but I want to override that setting on certain repositories where I want to allow access only to single defined users (or separate groups). The current configuration is SVN + WebDAV on Apache2. All my repositories are located at /var/lib/svn/. In dav_svn.authz I currently have [/] @developers = rw @users = r Now I want to add one repository (let's call it secret_repo) that would only allow access to one user who is also a member of the developers group. I tried to do [secret_repo:/] * = secret_user = rw Where secret_user is the user I'd like to give access to the repository, but it doesn't seem to work. Currently the server is using Apache's LDAP module to authenticate users from our Active Directory domain and I'd like to keep it that way if possible. Also, I seem to be able to browse all my repos freely with any web browser, which I'd like to block. The second problem is that I have WebSVN on the server, which is using Apache's LDAP authentication. Everyone who is a member of our domain can access it, so I'd like to hide this secret_repo from the WebSVN listing. It's currently configured with parentPath("/var/lib/svn");. Do I really need to remove that and add every repository separately, except the ones I want to hide?
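    On the anonymous-browsing point, that usually means the dav_svn <Location> block is not requiring authentication before the authz file is consulted. A rough sketch of such a block (the path, realm and LDAP URL are placeholders, not this server's actual configuration):

      <Location /svn>
        DAV svn
        SVNParentPath /var/lib/svn
        SVNListParentPath off
        AuthType Basic
        AuthName "Subversion repositories"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName" NONE
        Require valid-user
        AuthzSVNAccessFile /etc/apache2/dav_svn.authz
      </Location>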

    Read the article

  • Almost All Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software raid with a LVM on top. The server is running Citrix xenserver. On the hard disk which is still intact, the volume group gets detected well, but only one LV is left. (some hashes replaced by "x") # lvdisplay --- Logical volume --- LV Name /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT VG Name VG_XenStorage-x-x-x-x-408b91acdcae LV UUID x-x-x-x-x-x-vQmZ6C LV Write Access read/write LV Status available # open 0 LV Size 4.00 MiB Current LE 1 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 root@rescue ~ # vgdisplay --- Volume group --- VG Name VG_XenStorage-x-x-x-x-408b91acdcae System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 698.62 GiB PE Size 4.00 MiB Total PE 178848 Alloc PE / Size 1 / 4.00 MiB Free PE / Size 178847 / 698.62 GiB VG UUID x-x-x-x-x-x-53w0kL I could understand if a full physical volume is lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes? EDIT We are here in a rescue system. The problem is that the whole server does not boot (GRUB error 22) What we are trying to do is to access the root filesystem. But everything was in the LVM. We have only this: (parted) print Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 750GB 750GB primary boot, lvm And this 750GB LVM volume is exactly what we see on top.
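    One avenue that is sometimes worth trying, offered only as a sketch since whether it applies depends on what is actually left on the surviving disk: LVM keeps a ring buffer of old text metadata near the start of each PV, and an earlier copy of it may still describe the lost logical volumes.

      dd if=/dev/sdb1 bs=1M count=2 of=/root/pv-head.bin
      strings /root/pv-head.bin | less    # look for older "logical_volumes {" sections
      # if a complete earlier descriptor turns up, save it to a file and try:
      vgcfgrestore -f /root/old-metadata.txt VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae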

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution for the sake of my group's shared passwords, projects, works, etc. safety. Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters, everyone has local admin rights. Those 2 network admins weren't the ones who set the network up, they just took the lift recently when the previous ones quit. We usually find them laughing at private contents from users stored in the groups AD, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, they presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same to the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 atm) involved in the projects that have trouble using TortoiseSVN when contributing, and very oftenly we had to fix messed up updates. Well, I think these two give the idea of what we've been trying to reach. So, is there a practical "individual" protection for our extensive data or my hope can already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption gone the wildest, so denounce related options are likely impracticable. Yet both netadmins have strong "political bond" with the CEO and the President, hence their lousy behavior and our failed delation attempts.

    Read the article

  • Why is there an extra HDD under /dev being added in my Linux Kernel?

    - by user1279156
    I have created a Linux kernel and for some reason an extra drive is always added at bootup. My hard drive is listed as /dev/sdb. /dev/sda is created too, and it is 8 MB in size. I can't find anything in the kernel config that is creating this, but if I use a different kernel it is not there. Kernel logs show it as an attached SCSI device, looks just like my hard drive but only 8 MB, and has no partition table. It also doesn't appear to be a physical device. I've tried the kernel on many different models of PCs and it is always there. Does anyone know how to remove it? /dev/disk/by-id gives me: scsi-1AMCC_U21413034D98EB000584 scsi-1AMCC_U21413034D98EB000584-part1 scsi-353333330000007d0 scsi-SATA_ST3250312AS_5VY7SH42 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part1 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part2 hdparm -i /dev/sda gives me an "invalid argument". dd if=/dev/sda of=sda.img the resulting file does not have any content sdparm results: /dev/sda: Linux scsi_debug 0004 Device identification VPD page: Addressed logical unit: designator type: T10 vendor identification, code set: ASCII vendor id: Linux vendor specific: scsi_debug 2000 designator type: NAA, code set: Binary 0x53333330000007d0 Target port: designator type: Relative target port, code set: Binary transport: Serial Attached SCSI (SAS) Relative target port: 0x1 designator type: NAA, code set: Binary transport: Serial Attached SCSI (SAS) 0x52222220000007ce designator type: Target port group, code set: Binary transport: Serial Attached SCSI (SAS) Target port group: 0x100 Target device that contains addressed lu: designator type: NAA, code set: Binary transport: Serial Attached SCSI (SAS) 0x52222220000007cd designator type: SCSI name string, code set: UTF-8 transport: Serial Attached SCSI (SAS) SCSI name string: naa.52222220000007CD
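    The sdparm output above identifies the device as Linux scsi_debug, the kernel's SCSI debugging driver, which fabricates a small in-memory disk (8 MB by default). If that is what is creating the extra drive, leaving the option out of the kernel configuration should make it disappear; a sketch (the option name is the standard one, the exact .config depends on this particular build):

      # in the kernel .config
      # CONFIG_SCSI_DEBUG is not set
      # or, if it was built as a module, keep it from loading:
      rmmod scsi_debug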

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications, especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist, and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems. What I'd like to be able to do: 1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users? Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, that would lack the deny token, and provide admin access. 2) Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/) but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7. Non-Solutions: - RunAs: Requires administrator password! (but everyone calls it "sudo" anyway) - SuRun: From Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error: install: invalid user 'username' When it happened the first time, I assumed it was related to the package, or its installation script. But now I am attempting a different package, and alas, the same issue arises! Note: /etc/passwd and /etc/group are not set immutable with chattr. Here is the example output from an attempted installation (of varnish): Selecting previously unselected package varnish. Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ... Processing triggers for man-db ... Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ... Setting up varnish (3.0.2-1ubuntu0.1) ... install: invalid user `varnish' dpkg: error processing varnish (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: varnish Note: running Ubuntu 12.04. For now, I have resolved the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
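    For reference, the manual workaround described above looks roughly like this for the varnish example (the group name is an assumption, and this of course does not explain why the maintainer script's own user creation fails):

      addgroup --system varnish
      adduser --system --no-create-home --ingroup varnish varnish
      apt-get install -f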

    Read the article

  • Adding user groups from a remote domain server to permissions of a Remote Desktop terminal server fails. Why?

    - by doveyg
    I have 3 computers, two of which are servers running Windows Server 2008 and the third running Windows 7. One of the servers has the following roles installed: Active Directory, DHCP and DNS. The other server has a Terminal Server role installed. I am trying to log on to the Terminal Server via Remote Desktop using the Windows 7 machine with credentials from the Active Directory server. Sounds simple enough, right? Well, no. Whenever I try to add users or groups from the Active Directory Domain server to the Terminal Server's permissions for RDP it seems to ignore, or forget, them. With the various methods I was able to find, it either adds a strange string of numbers after the user group or the icon to the left has a question mark on it, and reopening the dialogue box replaces the user group with the name of the Domain. I am confident I have the Domain set up correctly, as I am able to log on with users from Active Directory on other computers I have put in the Domain, and when I attempt to browse the user objects from the Domain I am prompted with a username/password field and am able to view the structure of Active Directory objects. Please advise.

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs, but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options. Two Nagios installations (which is over-kill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions nor do I have to pay for the traffic, redundancy in the monitoring solution - I could monitor the Nagios servers. Cons: two installations to deal with and I'd need to run another server instance. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - security group will have to have NRPE (5666) opened for one source IP and also paying for a small amount of bandwidth at the Internet rate for data transfer between the regions. I guess my question is - how have others handled this problem and what are your recommendations? Thanks!
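    For illustration, option 2 usually comes down to ordinary Nagios object definitions like the sketch below on the US East monitoring host; the host name, public DNS name and check command are made up for the example, and NRPE (5666) would need to be opened in the EU security group to the monitoring server's IP:

      define host {
          use        generic-host
          host_name  eu-west-app-01
          address    ec2-203-0-113-10.eu-west-1.compute.amazonaws.com
      }

      define service {
          use                  generic-service
          host_name            eu-west-app-01
          service_description  CPU Load
          check_command        check_nrpe!check_load
      }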

    Read the article

  • htaccess rewriting all subdomains to subdirectories

    - by indorock
    I'm trying to build a catch-all for any subdomains (not captured by previous rewrite rules) for a certain domain, and serve a website from a subdirectory that resides in the same folder as the .htaccess file. I have already set up my vhosts.conf to send all unmapped requests to a "playground" folder, where I want to easily create new subdomains by simply adding a subfolder. So, my structure looks like this: /var/www/playground |-> /foo |-> /bar The .htaccess lives inside the /playground folder, and /foo and /bar are separate websites. I want http://foo.domain.com to point to /foo and http://bar.domain.com to /bar. Here is my .htaccess file: RewriteEngine On RewriteCond %{HTTP_HOST} ^([^.]+).domain.com$ [NC] RewriteCond %{REQUEST_URI} !^/%1/(.*) RewriteRule ^(.*) /%1/$1 [L] This is supposed to capture the subdomain, add it as a subfolder in the RewriteRule, then append the slash and path information after it. The second RewriteCond is there to prevent an infinite loop. My idea was that %1 in the second RewriteCond would be able to reference the capture group from the first RewriteCond. But so far I haven't had any success; it always ends up in a redirect loop. If I replace %1 in the second RewriteCond with a hardcoded 'foo' or 'bar', it works, which leads me to believe that you cannot refer to a capture group inside a RewriteCond. Is this true? Or am I missing something?
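    For what it is worth, backreferences such as %1 are only expanded in the TestString (the left-hand side) of a RewriteCond, not inside the CondPattern, which would explain the behaviour described above. A commonly used workaround, shown here as a sketch and untested against this particular setup, is to compare host and URI in a single condition and use an ordinary regex backreference:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
      RewriteCond %1#%{REQUEST_URI} !^([^#]+)#/\1(/|$)
      RewriteRule ^(.*)$ /%1/$1 [L]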

    Read the article

  • In Windows 7 is there a way to login from any user account and see the same workspace and be able to use the running programs of another user?

    - by WickedMongoose
    Our group has a number of Test Stands with PCs that are currently being accessed with a single group login. It has been sent from on high that this is not the way to do things for security reasons and we all agree. However. Multiple team members from around the world log into these Test Stands and need to be able to access programs that have been run from what would be different user profiles if we were to no longer have a single common login. Is there a way to have a common workspace such that when different users login, they will be able to see and interact with all running applications as if they were using a common login? Applications that we run link to and monopolize hardware resources connected to the PC and it is time consuming to restart and reload settings every time a new user logs in. Even if the program did not monopolize the hardware many of these programs are resource intensive and require a large portion of each machine's RAM to run, so trying to run the application again when it is already running from multiple user accounts would quickly consume all system resources. Simple Example: I open a chrome browser while logged into our pc. I then logout and another team member remotes in and should be able to see my open browser and be able to interact with it as if he were the one who opened it. Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. This is not a request for how to give all users access to the ability to run a program, but it is the request for how to allow all users access to interact with running applications that have been started by other users and need to be interacted with as if the new user started and has control of the application.

    Read the article

  • Cannot browse Visual SVN Repositories

    - by user1783560
    I went through "Unable to browse repository after setting up Visual SVN Server" and tried to implement the suggested answer, but even using the IP address does not solve my problem. I have: 1) Turned off the firewall. 2) Set the directory C:\Program Files (x86)\VisualSVN Server security rights to "full control" for all groups, including the "Network Service" group. 3) Set the directory D:\Repositories security rights to "full control" for most groups, including the "Network Service" group. 4) Tried switching between the secure and non-secure connection option in VisualSVN Server Manager. 5) Tried different port numbers, including 443 (with https) and 80 (http); also tried giving random ports with/without https. Using the computer name/IP address in the URL doesn't help. The Windows network diagnostics troubleshooting report has the following message for me: "Security policy settings or firewall settings on this computer might be blocking the connection." = Detected. Can someone guide me please? I have spent more than two weeks on this now without any solution. Thanks in advance! I appreciate your responses.

    Read the article

  • Ubuntu 9.10 won't reboot after replacing a failed drive

    - by user149041
    Hello Serverfault community. I hope someone can shed light on a peculiar problem I am having with an Ubuntu 9.10 server install. I am not a Linux expert but have the responsibility of fixing the box if something goes wrong. DOH! I have Ubuntu 9.10 server installed on on a desktop platform: Compaq Presario SR5027CL. There are two 1TB SATA drives configured in a RAID 1 array; I use the box as an email backup server for a small group of users. Last week one of the drives failed and was replaced with a new drive of the same type. The problem I have been having is getting the box to reboot after a restart or a shutdown halt. The OS and the RAID 1 array are on the same drives that make up the RAID 1 array. The replacement drive (sda) was added to the box and the partitions were created to match the existing good drive (sdb). The array is made up of sda1 and sdb1. I found an interesting point while checking the BIOS settings: there is a "HDD Boot Group Priority" section, and the new drive was selected as the "1. 3rd master"; the server wouldn't boot configured like that, but when I set the old drive to be "1. 4th master", the box will reboot. I'm checking some more things, but I would certainly appreciate any useful information. Thanks in advance.
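    A sketch of the usual follow-up step when one disk of a RAID 1 pair is replaced and the machine will only boot with the old disk first in the BIOS order: put a boot loader on both disks so either one can boot alone (device names here are assumed to match the post, sda being the replacement and sdb the surviving drive):

      grub-install /dev/sda
      grub-install /dev/sdb
      update-grub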

    Read the article

  • Profile System: Users share the same ID

    - by Malcolm Frexner
    I have a strange effect on my site when it is under heavy load. I randomly get the properties of other users settings. I have my own implementation of the profile system so I guess I can not blame the profile system itself. I just need a point to start debugging from. I guess there is a cookie-value that maps to an Profile entry somewhere. Is there any chance to see how this mapping works? Here is my profile provider: using System; using System.Text; using System.Configuration; using System.Web; using System.Web.Profile; using System.Collections; using System.Collections.Specialized; using B2CShop.Model; using log4net; using System.Collections.Generic; using System.Diagnostics; using B2CShop.DAL; using B2CShop.Model.RepositoryInterfaces; [assembly: log4net.Config.XmlConfigurator()] namespace B2CShop.Profile { public class B2CShopProfileProvider : ProfileProvider { private static readonly ILog _log = LogManager.GetLogger(typeof(B2CShopProfileProvider)); // Get an instance of the Profile DAL using the ProfileDALFactory private static readonly B2CShop.DAL.UserRepository dal = new B2CShop.DAL.UserRepository(); // Private members private const string ERR_INVALID_PARAMETER = "Invalid Profile parameter:"; private const string PROFILE_USER = "User"; private static string applicationName = B2CShop.Model.Configuration.ApplicationConfiguration.MembershipApplicationName; /// <summary> /// The name of the application using the custom profile provider. /// </summary> public override string ApplicationName { get { return applicationName; } set { applicationName = value; } } /// <summary> /// Initializes the provider. /// </summary> /// <param name="name">The friendly name of the provider.</param> /// <param name="config">A collection of the name/value pairs representing the provider-specific attributes specified in the configuration for this provider.</param> public override void Initialize(string name, NameValueCollection config) { if (config == null) throw new ArgumentNullException("config"); if (string.IsNullOrEmpty(config["description"])) { config.Remove("description"); config.Add("description", "B2C Shop Custom Provider"); } if (string.IsNullOrEmpty(name)) name = "b2c_shop"; if (config["applicationName"] != null && !string.IsNullOrEmpty(config["applicationName"].Trim())) applicationName = config["applicationName"]; base.Initialize(name, config); } /// <summary> /// Returns the collection of settings property values for the specified application instance and settings property group. 
/// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application use.</param> /// <param name="collection">A System.Configuration.SettingsPropertyCollection containing the settings property group whose values are to be retrieved.</param> /// <returns>A System.Configuration.SettingsPropertyValueCollection containing the values for the specified settings property group.</returns> public override SettingsPropertyValueCollection GetPropertyValues(SettingsContext context, SettingsPropertyCollection collection) { string username = (string)context["UserName"]; bool isAuthenticated = (bool)context["IsAuthenticated"]; //if (!isAuthenticated) return null; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); SettingsPropertyValueCollection svc = new SettingsPropertyValueCollection(); foreach (SettingsProperty prop in collection) { SettingsPropertyValue pv = new SettingsPropertyValue(prop); switch (pv.Property.Name) { case PROFILE_USER: if (!String.IsNullOrEmpty(username)) { pv.PropertyValue = GetUser(uniqueID); } break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } svc.Add(pv); } return svc; } /// <summary> /// Sets the values of the specified group of property settings. /// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application usage.</param> /// <param name="collection">A System.Configuration.SettingsPropertyValueCollection representing the group of property settings to set.</param> public override void SetPropertyValues(SettingsContext context, SettingsPropertyValueCollection collection) { string username = (string)context["UserName"]; CheckUserName(username); bool isAuthenticated = (bool)context["IsAuthenticated"]; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); if (uniqueID == 0) { uniqueID = dal.CreateProfileForUser(username, isAuthenticated, ApplicationName); } foreach (SettingsPropertyValue pv in collection) { if (pv.PropertyValue != null) { switch (pv.Property.Name) { case PROFILE_USER: SetUser(uniqueID, (UserInfo)pv.PropertyValue); break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } } } UpdateActivityDates(username, false); } // Profile gettters // Retrieve UserInfo private static UserInfo GetUser(int userID) { return dal.GetUser(userID); } // Update account info private static void SetUser(int uniqueID, UserInfo user) { user.UserID = uniqueID; dal.SetUser(user); } // UpdateActivityDates // Updates the LastActivityDate and LastUpdatedDate values // when profile properties are accessed by the // GetPropertyValues and SetPropertyValues methods. // Passing true as the activityOnly parameter will update // only the LastActivityDate. private static void UpdateActivityDates(string username, bool activityOnly) { dal.UpdateActivityDates(username, activityOnly, applicationName); } /// <summary> /// Deletes profile properties and information for the supplied list of profiles. 
/// </summary> /// <param name="profiles">A System.Web.Profile.ProfileInfoCollection of information about profiles that are to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(ProfileInfoCollection profiles) { int deleteCount = 0; foreach (ProfileInfo p in profiles) if (DeleteProfile(p.UserName)) deleteCount++; return deleteCount; } /// <summary> /// Deletes profile properties and information for profiles that match the supplied list of user names. /// </summary> /// <param name="usernames">A string array of user names for profiles to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(string[] usernames) { int deleteCount = 0; foreach (string user in usernames) if (DeleteProfile(user)) deleteCount++; return deleteCount; } // DeleteProfile // Deletes profile data from the database for the specified user name. private static bool DeleteProfile(string username) { CheckUserName(username); return dal.DeleteAnonymousProfile(username, applicationName); } // Verifies user name for sise and comma private static void CheckUserName(string userName) { if (string.IsNullOrEmpty(userName) || userName.Length > 256 || userName.IndexOf(",") > 0) throw new ApplicationException(ERR_INVALID_PARAMETER + " user name."); } /// <summary> /// Deletes all user-profile data for profiles in which the last activity date occurred before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are deleted.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { string[] userArray = new string[0]; dal.GetInactiveProfiles((int)authenticationOption, userInactiveSinceDate, ApplicationName).CopyTo(userArray, 0); return DeleteProfiles(userArray); } /// <summary> /// Retrieves profile information for profiles in which the user name matches the specified user names. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information // for profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves profile information for profiles in which the last activity date occurred on or before the specified date and the user name matches the specified user name. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user profile information for inactive profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindInactiveProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user profile data for all profiles in the data source. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information for all profiles in the data source.</returns> public override ProfileInfoCollection GetAllProfiles(ProfileAuthenticationOption authenticationOption, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user-profile data from the data source for profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information about the inactive profiles.</returns> public override ProfileInfoCollection GetAllInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Returns the number of profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles in which the last activity date occurred on or before the specified date.</returns> public override int GetNumberOfInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { int inactiveProfiles = 0; ProfileInfoCollection profiles = GetProfileInfo(authenticationOption, null, userInactiveSinceDate, 0, 0, out inactiveProfiles); return inactiveProfiles; } //Verifies input parameters for page size and page index. 
private static void CheckParameters(int pageIndex, int pageSize) { if (pageIndex < 1 || pageSize < 1) throw new ApplicationException(ERR_INVALID_PARAMETER + " page index."); } //GetProfileInfo //Retrieves a count of profiles and creates a //ProfileInfoCollection from the profile data in the //database. Called by GetAllProfiles, GetAllInactiveProfiles, //FindProfilesByUserName, FindInactiveProfilesByUserName, //and GetNumberOfInactiveProfiles. //Specifying a pageIndex of 0 retrieves a count of the results only. private static ProfileInfoCollection GetProfileInfo(ProfileAuthenticationOption authenticationOption, string usernameToMatch, object userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { ProfileInfoCollection profiles = new ProfileInfoCollection(); totalRecords = 0; // Count profiles only. if (pageSize == 0) return profiles; int counter = 0; int startIndex = pageSize * (pageIndex - 1); int endIndex = startIndex + pageSize - 1; DateTime dt = new DateTime(1900, 1, 1); if (userInactiveSinceDate != null) dt = (DateTime)userInactiveSinceDate; /* foreach(CustomProfileInfo profile in dal.GetProfileInfo((int)authenticationOption, usernameToMatch, dt, applicationName, out totalRecords)) { if(counter >= startIndex) { ProfileInfo p = new ProfileInfo(profile.UserName, profile.IsAnonymous, profile.LastActivityDate, profile.LastUpdatedDate, 0); profiles.Add(p); } if(counter >= endIndex) { break; } counter++; } */ return profiles; } } } This is how I use it in the controller: public ActionResult AddTyreToCart(CartViewModel model) { string profile = Request.IsAuthenticated ? Request.AnonymousID : User.Identity.Name; } I would like to debug: How can 2 users who provide different cookies get the same profileid? EDIT Here is the code for getuniqueid public int GetUniqueID(string userName, bool isAuthenticated, bool ignoreAuthenticationType, string appName) { SqlParameter[] parms = { new SqlParameter("@Username", SqlDbType.VarChar, 256), new SqlParameter("@ApplicationName", SqlDbType.VarChar, 256)}; parms[0].Value = userName; parms[1].Value = appName; if (!ignoreAuthenticationType) { Array.Resize(ref parms, parms.Length + 1); parms[2] = new SqlParameter("@IsAnonymous", SqlDbType.Bit) { Value = !isAuthenticated }; } int userID; object retVal = null; retVal = SqlHelper.ExecuteScalar(ConfigurationManager.ConnectionStrings["SQLOrderB2CConnString"].ConnectionString, CommandType.StoredProcedure, "getProfileUniqueID", parms); if (retVal == null) userID = CreateProfileForUser(userName, isAuthenticated, appName); else userID = Convert.ToInt32(retVal); return userID; } And this is the SP: CREATE PROCEDURE [dbo].[getProfileUniqueID] @Username VarChar( 256), @ApplicationName VarChar( 256), @IsAnonymous bit = null AS BEGIN SET NOCOUNT ON; /* [getProfileUniqueID] created 08.07.2009 mf Retrive unique id for current user */ SELECT UniqueID FROM dbo.Profiles WHERE Username = @Username AND ApplicationName = @ApplicationName AND IsAnonymous = @IsAnonymous or @IsAnonymous = null END
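    One detail that stands out in the stored procedure above, noted only as an illustration and not necessarily the cause of the shared-profile behaviour: with ANSI_NULLS on, a comparison such as @IsAnonymous = null never evaluates to true, so that OR branch has no effect. The test that is usually intended looks like this:

      WHERE Username = @Username
        AND ApplicationName = @ApplicationName
        AND (IsAnonymous = @IsAnonymous OR @IsAnonymous IS NULL)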

    Read the article

  • .NET Regex: How to extract IPv6 address parts

    - by Quandary
    Question: How does the .NET regex string to extract IPv6 addresses look like ? I can get it to extract a simple IPv6 address like "1050:0:0:0:5:600:300c:326b" but not the colon format ("ff06::c3"); My problem is, it should extract a 0 for every omitted value between the :: How do I do that? Below my code + description. Specify IPv6 addresses by omitting leading zeros. For example, IPv6 address 1050:0000:0000:0000:0005:0600:300c:326b may be written as 1050:0:0:0:5:600:300c:326b. Double colon Specify IPv6 addresses by using double colons (::) in place of a series of zeros. For example, IPv6 address ff06:0:0:0:0:0:0:c3 may be written as ff06::c3. Double colons may be used only once in an IP address. strInputString = "ff06::c3"; strInputString = "1050:0000:0000:0000:0005:0600:300c:326b"; string strPattern = "([A-Fa-f0-9]{1,4}:){7}([A-Fa-f0-9]{1,4})"; //strPattern = @"\A(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\z"; //strPattern = @"(\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,6}\Z)|(\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,5}\Z)|(\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,4}\Z)|(\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,3}\Z)|(\A([0-9a-f]{1,4}:){1,5}(:[0-9a-f]{1,4}){1,2}\Z)|(\A([0-9a-f]{1,4}:){1,6}(:[0-9a-f]{1,4}){1,1}\Z)|(\A(([0-9a-f]{1,4}:){1,7}|:):\Z)|(\A:(:[0-9a-f]{1,4}){1,7}\Z)|(\A((([0-9a-f]{1,4}:){6})(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)|(\A(([0-9a-f]{1,4}:){5}[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)|(\A([0-9a-f]{1,4}:){5}:[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,3}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,2}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,1}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A(([0-9a-f]{1,4}:){1,5}|:):(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A:(:[0-9a-f]{1,4}){1,5}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z) "; //strPattern = @"/^\s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?\s*$/"; //strPattern = @"(:?[0-9a-fA-F]{1,4}:){7}([0-9a-fA-F]{1,4})\z"; //strPattern = 
@"\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)\z"; //strPattern = @"\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}:)*)(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\z"; //strPattern = @"/^(?:(?:(?:(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){7})|(?:(?!(?:.*[a-f0-9](?::|$)){7,})(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,5})?::(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,5})?)))|(?:(?:(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){5}:)|(?:(?!(?:.*[a-f0-9]:){5,})(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,3})?::(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,3}:)?))?(?:(?:25[0-5])|(?:2[0-4][0-9])|(?:1[0-9]{2})|(?:[1-9]?[0-9]))(?:\.(?:(?:25[0-5])|(?:2[0-4][0-9])|(?:1[0-9]{2})|(?:[1-9]?[0-9]))){3}))$/i"; System.Text.RegularExpressions.Regex reValidationRule = new System.Text.RegularExpressions.Regex("^" + strPattern + "$"); if (reValidationRule.Match(strInputString).Success) // If matching pattern { System.Text.RegularExpressions.Match maResult = System.Text.RegularExpressions.Regex.Match(strInputString, strPattern); // Console.WriteLine(maResult.Groups.Count) string[] astrReturnValues = new string[4]; System.Text.RegularExpressions.GroupCollection gc = maResult.Groups; System.Text.RegularExpressions.CaptureCollection cc; int counter; //System.Web.Script.Serialization.JavaScriptSerializer jssJSONserializer = new System.Web.Script.Serialization.JavaScriptSerializer(); //Console.WriteLine(jssJSONserializer.Serialize()); // Loop through each group. for (int i = 0; i < gc.Count; i++) { Console.WriteLine("Group: {0}", i); cc = gc[i].Captures; counter = cc.Count; // Print number of captures in this group. Console.WriteLine("Captures count = " + counter.ToString()); // Loop through each capture in group. for (int ii = 0; ii < counter; ii++) { Console.WriteLine("Capture: {0}", ii); // Print capture and position. Console.WriteLine(cc[ii] + " Starts at character " + cc[ii].Index); } }

    Read the article

  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on app engine. I was using the Master/Slave datastore for local testing and recently switched over to using the HRD datastore for local testing, and parts of my app are breaking (which is to be expected). One part of the app that's breaking is where it sends a lot of writes quickly - that is because of the 1-second limit thing, it's failing with a concurrent modification exception. Okay, so that's also to be expected, so I have the browser retry the writes again later when they fail (maybe not the best hack but I'm just trying to get it working quickly). But a weird thing is happening. Some of the writes which should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but these other requests that seem to have committed on the first try are, I guess, never "applied." But from what I read about the Apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note: I am attempting to use automatic JDO caching. So this is where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction. all the requests are doing is reading a string out of an entity, modifying part of the string, and saving that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the level of "serializable" so I don't see what's happening here. the entity being modified is a root entity (not in a group) I have cross-group transactions enabled Another weird thing is happening. If the concurrent modification thing happens, and I subsequently edit more than 5 more entities (this is the max for cross-group transactions), then nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could it be possible that the PMF is returning the same PersistenceManager every time, or the PM is reusing the same transaction every time? I don't see how I could possibly get the above error otherwise. The code inside the transaction just edits one root entity. I can't think of any other way that GAE would give me the "too many entity groups" error. 
The relevant code (this is a simplified version) PersistenceManager pm = PMF.getManager(); Transaction tx = pm.currentTransaction(); String responsetext = ""; try { tx.begin(); // I have extra calls to "makePersistent" because I found that relying // on pm.close didn't always write the objects to cache, maybe that // was only a DataNucleus 1.x issue though Key userkey = obtainUserKeyFromCookie(); User u = pm.getObjectById(User.class, userkey); pm.makePersistent(u); // to make sure it gets cached for next time Key mapkey = obtainMapKeyFromQueryString(); // this is NOT a java.util.Map, just FYI Map currentmap = pm.getObjectById(Map.class, mapkey); Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity Text newMapData = parseModifyAndReturn(mapData); // transform the map currentmap.setMapData(newMapData); // mutate the Map object pm.makePersistent(currentmap); // make sure to persist so there is a cache hit tx.commit(); responsetext = "OK"; } catch (JDOCanRetryException jdoe) { // log jdoe responsetext = "RETRY"; } catch (Exception e) { // log e responsetext = "ERROR"; } finally { if (tx.isActive()) { tx.rollback(); } pm.close(); } resp.getWriter().println(responsetext); EDIT: so I have verified that it fails after exactly 5 transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, and some commit but don't apply (as described above). Then, I start creating more Foos, and do a few operations on those new Foos. If I only create four Foos, stopping and restarting app engine does NOT give me the IllegalArgumentException. However if I create five Foos (which is the limit for cross-group transactions), then when I stop and restart app engine, I do get the exception. So it seems that somehow these new Foos I am creating are counting toward the limit of 5 max entities per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the new requests for the 2nd through 5th Foos. EDIT2: it looks like the IllegalArgument thing is independent of the other bug. In other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know if it's a symptom of the same problem or if it's unrelated. EDIT3: I found out what was causing the (unrelated) IllegalArgumentException, it was a dumb mistake on my part. But the other issue is still happening. EDIT4: added pseudocode for the datastore access EDIT5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References: https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J Because transactions are not implemented, rollback is essentially a no-op. Therefore, I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time. A attempts to modify the data, and B attempts to modify a different part of the data. A writes to the datastore, then B writes, obliterating A's changes. 
Then B is "rolled back" by app engine, but since rollbacks are a no-op when running on the local datastore, B's changes stay, and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B, but does not retry A (since A was supposedly the transaction that succeeded).
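    As an aside on the retry hack mentioned above: rather than having the browser resubmit failed writes, the retry can live on the server so that each attempt re-reads the entity inside a fresh transaction. The sketch below is only illustrative, not the poster's actual fix; it reuses the PMF, Map, Text and parseModifyAndReturn names from the snippet above, and the attempt count and back-off are arbitrary assumptions.

    import javax.jdo.JDOCanRetryException;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;
    import com.google.appengine.api.datastore.Key;
    import com.google.appengine.api.datastore.Text;

    public class MapUpdater {

        private static final int MAX_ATTEMPTS = 3; // assumption: tune to your contention level

        // Re-runs the whole read-modify-write on contention, so every attempt
        // sees the latest committed state instead of stale data from the first try.
        public String updateMap(Key mapKey) {
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                PersistenceManager pm = PMF.getManager();      // factory wrapper from the post
                Transaction tx = pm.currentTransaction();
                try {
                    tx.begin();
                    Map currentmap = pm.getObjectById(Map.class, mapKey); // the post's Map entity, not java.util.Map
                    Text newMapData = parseModifyAndReturn(currentmap.getMapData());
                    currentmap.setMapData(newMapData);
                    pm.makePersistent(currentmap);
                    tx.commit();
                    return "OK";
                } catch (JDOCanRetryException e) {
                    // Contention on the entity group: back off briefly, then loop and retry.
                    try {
                        Thread.sleep(100L * attempt);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        return "ERROR";
                    }
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                    pm.close();
                }
            }
            return "RETRY"; // give up and let the caller decide what to surface to the client
        }

        private Text parseModifyAndReturn(Text mapData) {
            return mapData; // placeholder for the JSON transformation described in the post
        }
    }

    This keeps the retry on the server and avoids a second round trip from the browser. It does not, however, explain the commit-without-apply behaviour the poster describes, which (per EDIT5) looks like a limitation of the local development datastore rather than of the code.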

    Read the article

  • Part 2 – Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2. In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig topology we want to get to. In Part 2, let's start by understanding the components of Azure we'll be making use of, followed by manually putting them together to create the test rig, so… let's get down and dirty and start setting up the Test Rig.
    What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we'll make use of Windows Azure Compute, and to enable communication between the Test Controller and the Test Agents we'll make use of Windows Azure Connect.
    Azure Connect: The Test Controller is on premise and the Test Agents are in the cloud (how will they talk?). To enable communication between the two, we'll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure Connect. A very useful video explains everything you wanted to know about Windows Azure Connect.
    Azure Compute: Windows Azure Compute provides developers with a platform to host and manage applications in Microsoft's data centres across the globe. A Windows Azure application is built from one or more components called 'roles.' Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role; we'll be using the Worker role to set up the Test Agents. A very nice blog post discusses the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server:
    Virtual Machine Size | CPU Cores | Memory   | Cost Per Hour
    Extra Small          | Shared    | 768 MB   | $0.04
    Small                | 1         | 1.75 GB  | $0.12
    Medium               | 2         | 3.50 GB  | $0.24
    Large                | 4         | 7.00 GB  | $0.48
    Extra Large          | 8         | 14.00 GB | $0.96
    You might want to review the Windows Azure Pricing FAQ.
    Let's Get Started building the Test Rig…
    Configuration:
    Machine | Role                              | Comments
    VM – 1  | Domain Controller for Playpit.com | On Premise
    VM – 2  | TFS, Test Controller              | On Premise
    VM – 3  | Test Agent                        | Cloud
    In this blog post I assume that you have the domain, Team Foundation Server and Test Controller installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog; Brian also has some great hands-on labs on TFS 2010 that you may want to explore.
    I. Let's start building VM – 3: The Test Agent
    Download the Windows Azure SDK and Tools. Open Visual Studio and create a new Windows Azure Project using the Cloud template. Choose the Worker Role for the reasons explained in the earlier post. The WorkerRole.cs implements the Run() and OnStart() methods; no code changes are required. You should be able to compile the project and run it in the compute emulator (the compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure Connect module in the Test Agent configuration; now the Connect endpoints need to be installed on the on-premise machines. Let's have a look at how we can do this. Log on to VM – 2, which runs the TFS Server and Test Controller. Log on to the Windows Azure Management Portal and click on Virtual Network. If you already have a subscription you should see the virtual network screen; if not, you will be asked to complete the subscription first. Click on Install Local Endpoints at the top left of the panel and you get a URL with a token ID appended to it. Remember the token I showed you earlier? In theory the token you get here should match the token you added to the Test Agent config file. Copy the URL to the clipboard and paste it into Internet Explorer (important: the installation at present only works out of IE, and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later; you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure Connect icon in the system tray. Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology.
NOTE – Unfortunately I could not see the Windows Azure Connect icon in the system tray; a bit of binging with Google revealed that the icon is only shown when the 'Windows Azure Connect Endpoint' service is started. So go to services.msc and make sure that the service is started; if not, start it. Unfortunately again, the service did not start for me on a manual start, and I realised that one of the dependent services was disabled. You can look at the service dependencies, start them, and then start Windows Azure Connect. Bottom line: you need to start the Windows Azure Connect service before you can proceed. Please refer to MSDN for more on troubleshooting Windows Azure Connect. (Follow the next step as well.)
Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group; let's call it 'Test Rig'. Make sure you add VM – 2 (the TFS Server VM where you just installed the endpoint). Now if you go back to the Azure Connect icon in the system tray and click 'Refresh Policy' you will notice that the disconnected status of the icon changes to ready for connection.
III. Importing the Certificate in to the Windows Azure Management Portal
Before publishing, you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on 'Hosted Services, Storage Accounts & CDN', then 'Management Certificates', followed by Add Certificates. Browse to the location where you saved the certificate earlier (refer to Step I in case you forgot). Now you should be able to see the imported certificate; make sure the thumbprint of the certificate matches the one you inserted in the config files.
IV. Publish the Windows Azure Worker Role aka Test Agent
Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio, right click the Windows Azure project and select Publish. Verify the information in the wizard; from the advanced settings tab, you can also enable capture of IntelliTrace or profiling information. Click Next and click Publish!
From the View menu select the Windows Azure Activity Log window. Now you should be able to see the deployment progress in real time. In the Windows Azure Management Portal, you should also be able to see the progress of the creation of a new Worker Role. Once the deployment is complete you should be able to RDP (go to the Run prompt, type mstsc and enter the machine name in the pop up) into the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account, it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the Connect module, you can connect to the Test Agent machine using the Windows Azure Management Portal and troubleshoot the reason for failure: you will be able to log in with the user name and password you specified in the config file for the keys 'RemoteAccess.AccountUsername' and 'RemoteAccess.AccountEncryptedPassword' (just enter the password unencrypted), then fix the join or manually join the machine to the domain. Once you have managed to join the Test Agent VM to the domain, move on to the next step.
So, log in to the Test Agent Worker Role VM with the Playpit domain administrator and verify that you can log in, that the machine is connected to the domain and that the Connect service is running successfully. If yes, give yourself a pat on the back - you are 80% mission accomplished!
Go to the Windows Azure Management Portal, click on Virtual Network, click on Groups and Roles, click on Test Rig, then click Edit Group to edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check 'Allow connections between endpoints in the group'; this enables communication between the test controller and the test agents, and between test agents. Click Save. Now you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller.
V. Configuring VM – 3: Installing the Test Agent and Associating the Test Agent with the Controller
Log in to the Worker Role Test Agent VM that you have just successfully deployed; make sure you log in with the domain administrator account. Download the All Agents software from MSDN ('en_visual_studio_agents_2010_x86_x64_dvd_509679.iso'), extract the ISO and navigate to where you have extracted it. In my case, I have extracted the ISO to "C:\Resources\Temp\VsAgentSetup". Open the Test Agent folder and double click setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing the TFS Test Agent on the VM, refer to the walkthrough on MSDN.
Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run it as a different user, i.e. an administrator. This is really to run the configuration wizard with elevated privileges (UAC might block some things otherwise). In the run options, you can select 'Service'; you do not need to run the agent interactively unless you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, I would never do that; I would create a separate test user service account for this purpose.
But for the blog post, we are using the most powerful user so that no policies or restrictions block you. Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve before proceeding. In my experience, you may run into either a permissions issue or a firewall blocking communication. And now the moment of truth! Go to VM – 2, open up Visual Studio and from the Test menu select Manage Test Controller. Mission accomplished! You should be able to see the Test Agent that you have just configured.
VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig
I have various blog posts on performance testing with Visual Studio Ultimate; you can follow the links and videos below.
Blog Posts:
- Part 1 – Performance Testing using Visual Studio 2010 Ultimate
- Part 2 – Performance Testing using Visual Studio 2010 Ultimate
- Part 3 – Performance Testing using Visual Studio 2010 Ultimate
Videos:
- Test Tools Configuration & Settings in Visual Studio
- Why & How to Record Web Performance Tests in Visual Studio Ultimate
- Goal Driven Load Testing using Visual Studio Ultimate
Now that you have created your load tests, there is one last change you need to make before you can run them on your Azure Test Rig: create a new Test Settings file, change the Test Execution method to 'Remote Execution' and select the test controller you configured the Worker Role Test Agent against (in our case VM – 2). So, go on, fire off a test run and see the results of the test being executed on the Azure-ed Test Rig.
Review and What's next?
A quick recap of the benefits of running the Test Rig in the cloud and what I will be covering in the next blog post. And I would love to hear your feedback!
Advantages:
- Utilizing the power of Azure compute to run a heavy virtual user load.
- Benefiting from Azure's flexibility: destroy Test Agents when not in use; it takes less than 25 minutes to spin up a new Test Agent.
- Most important, test network latency (network latency and connection speed are two different things; network latency is usually very hard to test). By placing the Test Agents in Microsoft data centres around the globe, you can actually test the lag in transferring the bytes caused not by a slow connection but by the page being requested from the other side of the globe.
Next Steps:
The process of spinning up the Test Agents in Windows Azure is not 100% automated. I am working on the worker process and PowerShell scripts to automate the role deployment, the unattended install of the test agent software and the registration of the test agent with the test controller. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post; I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment. See you in Part III.

    Read the article

  • Cisco PIX 8.0.4, static address mapping not working?

    - by Bill
    upgrading a working Pix running 5.3.1 to 8.0.4. The memory/IOS upgrade went fine, but the 8.0.4 configuration is not quite working 100%. The 5.3.1 config on which it was based is working fine. Basically, I have three networks (inside, outside, dmz) with some addresses on the dmz statically mapped to outside addresses. The problem seems to be that those addresses can't send or receive traffic from the outside (Internet.) Stuff on the DMZ that does not have a static mapping seems to work fine. So, basically: Inside - outside: works Inside - DMZ: works DMZ - inside: works, where the rules allow it DMZ (non-static) - outside: works But: DMZ (static) - outside: fails Outside - DMZ: fails (So, say, udp 1194 traffic to .102, http to .104) I suspect there's something I'm missing with the nat/global section of the config, but can't for the life of me figure out what. Help, anyone? The complete configuration is below. Thanks for any thoughts! ! PIX Version 8.0(4) ! hostname firewall domain-name asasdkpaskdspakdpoak.com enable password xxxxxxxx encrypted passwd xxxxxxxx encrypted names ! interface Ethernet0 nameif outside security-level 0 ip address XX.XX.XX.100 255.255.255.224 ! interface Ethernet1 nameif inside security-level 100 ip address 192.168.68.1 255.255.255.0 ! interface Ethernet2 nameif dmz security-level 10 ip address 192.168.69.1 255.255.255.0 ! boot system flash:/image.bin ftp mode passive dns server-group DefaultDNS domain-name asasdkpaskdspakdpoak.com access-list acl_out extended permit udp any host XX.XX.XX.102 eq 1194 access-list acl_out extended permit tcp any host XX.XX.XX.104 eq www access-list acl_dmz extended permit tcp host 192.168.69.10 host 192.168.68.17 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq ssh access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 192.168.68.0 255.255.255.0 eq 5901 access-list acl_dmz extended permit udp host 192.168.69.103 any eq ntp access-list acl_dmz extended permit udp host 192.168.69.103 any eq domain access-list acl_dmz extended permit tcp host 192.168.69.103 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.100 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.101 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.101 eq 3306 access-list acl_dmz extended permit tcp host 192.168.69.104 host 192.168.68.102 eq 3306 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8080 access-list acl_dmz extended permit tcp 10.71.83.0 255.255.255.0 host 192.168.69.104 eq 8099 access-list acl_dmz extended permit tcp host 192.168.69.105 any eq www access-list acl_dmz extended permit tcp host 192.168.69.103 any eq smtp access-list acl_dmz extended permit tcp host 192.168.69.105 host 192.168.68.103 eq ssh access-list acl_dmz extended permit tcp host 192.168.69.104 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq www access-list acl_dmz extended permit tcp host 192.168.69.100 any eq https pager lines 24 mtu outside 1500 mtu inside 1500 mtu dmz 1500 icmp unreachable 
rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 static (dmz,outside) XX.XX.XX.103 192.168.69.11 netmask 255.255.255.255 static (inside,dmz) 192.168.68.17 192.168.68.17 netmask 255.255.255.255 static (inside,dmz) 192.168.68.100 192.168.68.100 netmask 255.255.255.255 static (inside,dmz) 192.168.68.101 192.168.68.101 netmask 255.255.255.255 static (inside,dmz) 192.168.68.102 192.168.68.102 netmask 255.255.255.255 static (inside,dmz) 192.168.68.103 192.168.68.103 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.104 192.168.69.100 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.105 192.168.69.105 netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.102 192.168.69.10 netmask 255.255.255.255 access-group acl_out in interface outside access-group acl_dmz in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.97 1 route dmz 10.71.83.0 255.255.255.0 192.168.69.10 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute dynamic-access-policy-record DfltAccessPolicy no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 192.168.68.17 255.255.255.255 inside telnet timeout 5 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp ! service-policy global_policy global prompt hostname context Cryptochecksum:2d1bb2dee2d7a3e45db63a489102d7de

    Read the article

  • Print driver installs failing

    - by Kasius
    All of the Windows 7 64-bit Enterprise machines in my organization are failing to install a good number of printer drivers that previously installed without issue. This only happens with printer drivers. And not with all printer drivers. Just some. Network drivers, video drivers, etc. have had no problems. Here is part of setupapi.dev.log for a Dymo LabelWriter printer driver that is failing to install: dvi: {Plug and Play Service: Device Install for USBPRINT\DYMOLABELWRITER_450_TURBO\6&538F51D&0&USB001} ump: Creating Install Process: DrvInst.exe 09:36:58.071 ndv: Infpath=C:\Windows\INF\oem0.inf ndv: DriverNodeName=dymo.inf:DYMO.NTamd64.6.0:LW_450_TURBO_VISTA:8.1.0.363:usbprint\dymolabelwriter_450_aa08 ndv: DriverStorepath=C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf ndv: Building driver list from driver node strong name... dvi: Searching for hardware ID(s): dvi: usbprint\dymolabelwriter_450_aa08 dvi: dymolabelwriter_450_aa08 inf: Opened PNF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) dvi: Selected driver installs from section [LW_450_TURBO_VISTA] in 'c:\windows\system32\driverstore\filerepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf'. dvi: Class GUID of device changed to: {4d36e979-e325-11ce-bfc1-08002be10318}. dvi: Set selected driver complete. ndv: {Core Device Install} 09:36:58.133 inf: Opened INF: 'C:\Windows\INF\oem0.inf' ([strings]) inf: Saved PNF: 'C:\Windows\INF\oem0.PNF' (Language = 0409) dvi: {DIF_ALLOW_INSTALL} 09:36:58.164 dvi: Using exported function 'ClassInstall32' in module 'C:\Windows\system32\ntprint.dll'. dvi: Class installer == ntprint.dll,ClassInstall32 dvi: No CoInstallers found dvi: Class installer: Enter 09:36:58.164 dvi: Class installer: Exit dvi: Default installer: Enter 09:36:58.180 dvi: Default installer: Exit dvi: {DIF_ALLOW_INSTALL - exit(0xe000020e)} 09:36:58.180 ndv: Installing files... dvi: {DIF_INSTALLDEVICEFILES} 09:36:58.180 dvi: Class installer: Enter 09:36:58.180 inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings]) !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 ndv: Performing device install final cleanup... ! ndv: Queueing up error report since device installation failed... ndv: {Core Device Install - exit(0x00000490)} 09:37:22.063 dvi: {DIF_DESTROYPRIVATEDATA} 09:37:22.063 dvi: Class installer: Enter 09:37:22.063 dvi: Class installer: Exit dvi: Default installer: Enter 09:37:22.063 dvi: Default installer: Exit dvi: {DIF_DESTROYPRIVATEDATA - exit(0xe000020e)} 09:37:22.063 ump: Server install process exited with code 0x00000490 09:37:22.063 ump: {Plug and Play Service: Device Install exit(00000490)} Notice these lines in particular: !!! dvi: Class installer: failed(0x00000490)! !!! dvi: Error 1168: Element not found. dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063 ndv: Device install status=0x00000490 From what I have read, the "Element not found" error should be accompanied by an event describing what element was not found. 
The error that appears in Device Manager is "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." It appears to be signed fine though. It has an accompanying .CAT file and worked previously. And when installing, the following messages are logged in setupapi.dev.log: sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE} 09:36:56.277 inf: Opened INF: 'C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf' ([strings]) sig: {_VERIFY_FILE_SIGNATURE} 09:36:56.292 sig: Key = dymo.inf sig: FilePath = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf sig: Catalog = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\DYMO.CAT sig: Success: File is signed in catalog. sig: {_VERIFY_FILE_SIGNATURE exit(0x00000000)} 09:36:56.355 sto: Validating driver package files against catalog 'DYMO.CAT'. sto: Driver package is valid. sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE exit(0x00000000)} 09:36:56.402 sto: Verified driver package signature: sto: Digital Signer Score = 0x0D000005 sto: Digital Signer Name = Microsoft Windows Hardware Compatibility Publisher Now here's where it gets strange. If I take it off the domain, it installs fine. But it doesn't seem to have anything to do with Group Policy. I moved the machine to an OU that blocks inheritance, ran a gpupdate, ran rsop.msc to verify, and tried again. And it still didn't work. Likewise, I removed a machine from the domain, manually set all of the domain Group Policy settings in gpedit.msc, and tried that way, and it worked fine. So it seems like the Group Policy settings are irrelevant. What other domain-related issue could be causing this though? Any ideas on what to try next would be greatly appreciated. I'm not sure where to go from here. Thanks!

    Read the article

< Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >