Search Results



  • ASA5505 Novice: Setting up Outside/Inside/DMZ, with the DMZ as a Guest Network

    - by GriffJ
    I need a little help developing a config for our ASA 5505. I'm an MCSA/MCITPAS, but I don't have a lot of practical Cisco experience. Here is what I need help with: we currently have a PIX as our border gateway, and it's antiquated. It only has a 50-user license, which means I'm constantly clearing local-host entries throughout the day as people complain. I discovered that the last IT person bought a couple of ASA 5505s, and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the PIX to the ASA, but since I was going this far anyway, I thought I'd go further and also remove another old Cisco router that was used only for the guest network; I know the ASA can do both jobs. Here is a scenario I wrote up, with the actual IPs changed to protect the innocent:

        Outside network: 1.2.3.10 255.255.255.248 (we have a /29), on e0/0
        Inside network:  10.10.36.0 255.255.252.0, on e0/2-7
        DMZ network:     192.168.15.0 255.255.255.0, on e0/1

    Requirements:

    - The DMZ network has DHCPD enabled, with a pool of 192.168.15.50-192.168.15.250.
    - The DMZ network needs to be able to reach DNS on the inside network at 10.10.37.11 and 10.10.37.12.
    - The DMZ network needs to be able to reach webmail on the inside network at 10.10.37.15.
    - The DMZ network needs to be able to reach the business website on the inside network at 10.10.37.17.
    - The DMZ network needs access to the outside network (access to the internet).
    - The inside network has NO DHCPD (DHCP is handled by a domain controller).
    - The inside network needs to be able to see anything on the DMZ network.
    - The inside network needs access to the outside network (access to the internet).

    There is some access-list and static mapping work done already:

        ! Maps external IPs from our ISP to our inside server IPs
        static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255
        static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255
        static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255

        ! Allows access to our web server/mail server/VPN from the outside
        access-list 108 permit tcp any host 1.2.3.11 eq https
        access-list 108 permit tcp any host 1.2.3.11 eq smtp
        access-list 108 permit tcp any host 1.2.3.11 eq 993
        access-list 108 permit tcp any host 1.2.3.11 eq 465
        access-list 108 permit tcp any host 1.2.3.12 eq www
        access-list 108 permit tcp any host 1.2.3.12 eq https
        access-list 108 permit tcp any host 1.2.3.13 eq pptp

    Here is all the NAT and route configuration I have so far:

        global (outside) 1 interface
        global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248
        nat (inside) 1 0.0.0.0 0.0.0.0
        nat (dmz) 1 0.0.0.0 0.0.0.0
        route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
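    A minimal sketch of the DMZ piece (assuming pre-8.3 NAT syntax to match the commands above, and the usual security levels of 100 inside / 50 dmz / 0 outside; the ACL name is made up): allow the guest DMZ to reach only the published inside services, block the rest of the inside networks, and still let guests out to the internet. The identity static keeps dmz-to-inside traffic out of the outside NAT pool.

        ! hypothetical ACL name "dmz_in"; adjust hosts/ports to taste
        access-list dmz_in permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
        access-list dmz_in permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
        access-list dmz_in permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
        access-list dmz_in permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
        access-list dmz_in deny ip 192.168.15.0 255.255.255.0 10.10.36.0 255.255.252.0
        access-list dmz_in permit ip 192.168.15.0 255.255.255.0 any
        access-group dmz_in in interface dmz
        ! keep dmz<->inside traffic un-NATted (identity static)
        static (inside,dmz) 10.10.36.0 10.10.36.0 netmask 255.255.252.0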


  • New-ManagedContentSettings - not working properly under Exchange 2010

    - by mfinni
    I have a client that is divesting a business unit into a new AD forest, Exchange org, etc. We're using Quest tools to migrate users and mailboxes; however, I have to build the new infrastructure to match the old one. In the old environment we use managed folder mailbox policies to limit (or allow) retention. They started with Exchange 2007 and never upgraded to retention policies; oh well. In the old environment, when you use a 2007 server to define a new managed content setting, you can pick "Email" from the dropdown for MessageClass. That is a display name; the actual MessageClass value is:

        MessageClass : IPM.Note;IPM.Note.AS/400 Move Notification Form v1.0;IPM.Note.Delayed;
        IPM.Note.Exchange.ActiveSync.Report;IPM.Note.JournalReport.Msg;IPM.Note.JournalReport.Tnef;
        IPM.Note.Microsoft.Missed.Voice;IPM.Note.Rules.OofTemplate.Microsoft;
        IPM.Note.Rules.ReplyTemplate.Microsoft;IPM.Note.Secure.Sign;IPM.Note.SMIME;
        IPM.Note.SMIME.MultipartSigned;IPM.Note.StorageQuotaWarning;
        IPM.Note.StorageQuotaWarning.Warning;IPM.Notification.Meeting.Forward;IPM.Outlook.Recall;
        IPM.Recall.Report.Success;IPM.Schedule.Meeting.*;REPORT.IPM.Note.NDR

    If I take that and try to mangle it into a New-ManagedContentSettings cmdlet for Exchange 2010 in my new environment, here's what I get:

        New-ManagedContentSettings -Name "Delete Messages older then 90 days" `
            -FolderName "Entire Mailbox" -RetentionEnabled $True -AgeLimitForRetention 90 `
            -TriggerForRetention WhenDelivered -RetentionAction DeleteAndAllowRecovery `
            -MessageClass "IPM.Note","IPM.Note.AS/400MoveNotificationFormv1.0","IPM.Note.Delayed",
                "IPM.Note.Exchange.ActiveSync.Report","IPM.Note.JournalReport.Msg",
                "IPM.Note.JournalReport.Tnef","IPM.Note.Microsoft.Missed.Voice",
                "IPM.Note.Rules.OofTemplate.Microsoft","IPM.Note.Rules.ReplyTemplate.Microsoft",
                "IPM.Note.Secure.Sign","IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned",
                "IPM.Note.StorageQuotaWarning","IPM.Note.StorageQuotaWarning.Warning",
                "IPM.Notification.Meeting.Forward","IPM.Outlook.Recall","IPM.Recall.Report.Success",
                "IPM.Schedule.Meeting.*","REPORT.IPM.Note.NDR" `
            -WhatIf

        Invoke-Command : Cannot bind parameter 'MessageClass' to the target. Exception setting
        "MessageClass": "The length of the property is too long. The maximum length is 255 and
        the length of the value provided is 518."
        At C:\Users\MFinnigan.sa\AppData\Roaming\Microsoft\Exchange\RemotePowerShell\pfexcas02.fve.ad.5ssl.com\pfexcas02.fve.ad.5ssl.com.psm1:28204 char:29
        +         $scriptCmd = { & <<<< $script:InvokeCommand `
            + CategoryInfo          : WriteError: (:) [New-ManagedContentSettings], ParameterBindingException
            + FullyQualifiedErrorId : ParameterBindingFailed,Microsoft.Exchange.Management.SystemConfigurationTasks.NewManagedContentSettings

    So, the config object can store all that mess, but I can't fit it through the cmdlet to create the object. Lovely. Any ideas?
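    One possible workaround, sketched below and untested: the 255-character limit is enforced on the single MessageClass value, so splitting the classes into two batches and creating two managed content settings (with identical retention options, in the same "Entire Mailbox" managed folder) keeps each value under the limit. The batch split shown here is illustrative.

        $batch1 = "IPM.Note","IPM.Note.Delayed","IPM.Note.Exchange.ActiveSync.Report",
                  "IPM.Note.JournalReport.Msg","IPM.Note.JournalReport.Tnef",
                  "IPM.Note.Microsoft.Missed.Voice","IPM.Note.Rules.OofTemplate.Microsoft",
                  "IPM.Note.Rules.ReplyTemplate.Microsoft","IPM.Note.Secure.Sign"
        $batch2 = "IPM.Note.SMIME","IPM.Note.SMIME.MultipartSigned","IPM.Note.StorageQuotaWarning",
                  "IPM.Note.StorageQuotaWarning.Warning","IPM.Notification.Meeting.Forward",
                  "IPM.Outlook.Recall","IPM.Recall.Report.Success","IPM.Schedule.Meeting.*",
                  "REPORT.IPM.Note.NDR"
        $common = @{
            FolderName           = "Entire Mailbox"
            RetentionEnabled     = $true
            AgeLimitForRetention = 90
            TriggerForRetention  = "WhenDelivered"
            RetentionAction      = "DeleteAndAllowRecovery"
        }
        # two settings instead of one; each -MessageClass value now fits under 255 chars
        New-ManagedContentSettings -Name "Delete 90 days (batch 1)" -MessageClass $batch1 @common
        New-ManagedContentSettings -Name "Delete 90 days (batch 2)" -MessageClass $batch2 @common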


  • VPN Connection Causes Internal LAN Connection Loss with Server

    - by sleepisfortheweak
    I've tried configuring basic PPTP VPN at my small business using a number of different tutorials. As far as I can tell, the actual VPN connection works fine, but upon connecting a client, the server "disappears" from the internal LAN. The RRAS service must be stopped before the connection is restored.

    My setup: the network is simply a DSL gateway/router to the outside, functioning as NAT/firewall/DHCP. The server is a Windows Server 2008 machine at fixed IP 192.168.1.200. The server has one NIC, so I used the "custom" option when configuring RRAS. The RRAS settings should be default, except that I've disabled ports for connection types I'm not using and reduced PPTP ports to 10. I've also created an address pool and disabled DHCP packet forwarding. The server only functions as a file share and now a VPN server. Local LAN computers all have network shares mapped to the server, authenticated against local users/groups set up on the server.

    The problem: the moment a client connects through VPN, the server disappears from the local network. All mapped drives disconnect and there is no response to a ping of 192.168.1.200. Even if the client disconnects, the server does not reappear at that address until the RRAS service is stopped.

    I've tried:

    - Using an address pool both inside and outside the local subnet
    - Using DHCP relay
    - Checking inbound/outbound filters (none enabled)

    The fact that nothing I've tried has had any effect, and that I can connect and successfully obtain an IP, tells me it's something more fundamental I'm missing. My gut says it's something to do with the second IP address added by the VPN client somehow taking over the interface, or traffic from the local LAN accidentally getting routed to the VPN client instead of being handled at the server once RRAS becomes "active" when a client connects. Hopefully this is obvious to someone with real IT experience. I've been doing this a while and have almost never been stumped; I'm starting to think it might actually be something tricky, since my setup is pretty basic yet refuses to work. I'll be happy to include more info if this doesn't ring any bells right away for anyone. Thanks.


  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1:

    - There are two web servers, say WS1 and WS2. These two web servers currently share nothing, and there isn't any foreseen necessity for more web servers within each DC.
    - dc1 also has a local load balancer set up with session stickiness: if a user u1 lands on dc1 and the load balancer routes his first request to WS1, then from there on all of u1's requests get routed to WS1.
    - The local load balancer and web servers are invisible to the user. The load balancer listens on a virtual IP assigned to the virtual cluster of web servers WS1 and WS2; the virtual IP is what the host name resolves to in DNS.
    - There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route the traffic between dc1 and dc2 based on the client. The options I have found so far are:

    1. Create client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center I want to route them to.
    2. Assign www.example.com and www1.example.com to the first DC (dc1) and assign www2.example.com to dc2. All requests first get routed to dc1, where WS1 and WS2 redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2.

    I need help with the following:

    - If I set up a global load balancer between dc1 and dc2, do I have any alternative solutions? That is, can a global load balancer route traffic based on the URL? (See the sketch below.)
    - Are there drawbacks to the subdomain-based solution compared with the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see that he is getting redirected to a different URL.
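    On the first question: a DNS-level global load balancer only sees host names, not paths, so routing on /client1 versus /client2 needs an HTTP-aware tier in front of the DCs. A minimal sketch of that idea, using nginx purely as a stand-in (the upstream host names are hypothetical):

        # route by client context path; each upstream is one DC's local VIP
        upstream dc1 { server vip.dc1.example.com; }
        upstream dc2 { server vip.dc2.example.com; }

        server {
            listen 80;
            server_name www.example.com;

            location /client1/ { proxy_pass http://dc1; }
            location /client2/ { proxy_pass http://dc2; }
            location /         { proxy_pass http://dc1; }   # default
        }

    Note that this proxy tier itself has to live somewhere, so it reintroduces a single-entry-point dependency much like the www1 option does.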


  • Resetting root password on Fedora Core 3 - serial cable access only

    - by Sensible Eddie
    A little background: we have an old rackmount server running a customised version of Fedora, manufactured by a company called Navaho. The server is a TeamCAT, running some proprietary rubbish called Freedom2. We have to keep it going; the alternative is extraordinarily expensive, and the business is not likely to be running much longer to justify changing things. Through one means or another, it has fallen upon me to try to resolve our lack of root access. The previous admin has fallen under the proverbial bus, and nobody has any clue.

    We have no access to the root account for this server. ssh is running on it, and there is one account, admin, that we can log in with; however, it has no permission to do anything (ironic...). The only other way into the server is with a null-modem serial cable. This works... up to a point. I can see the BIOS, I can see the post-BIOS screen, and then I see "Starting grub", followed by another screen with about four lines of Linux information, but then it stops at that point. The server continues booting, and all services come online after around two minutes, but the serial terminal displays no more information.

    I understand it is possible to put Linux into "single user mode" to reset a root password, but I have no idea how to do this beyond trying to interrupt it at the grub stage listed above. When I tried that, it just froze. It was almost as if grub had appeared (since the server did not continue booting) but I couldn't see it on the serial terminal, which made me think maybe the grub screen has some different serial settings? I don't know; it's the first time I've ever used serial for access!

    A friend of mine suggested trying a Fedora boot CD. We could boot from USB, so something along this approach is possible, but again we can still only see what's going on through the serial terminal, so it might not be achievable.

    Does anyone have any suggestions for things I can try? I appreciate this is a bit of a long shot, but any assistance would be invaluable.

    UPDATE 1 (28/8/12): we will be making some attempts on this today and will post further details later!
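    The guess about grub's serial settings is plausible: with GRUB legacy (the Fedora Core 3 era), the menu is only drawn on the serial line if grub.conf says so; otherwise it goes to the VGA console, while the kernel (presumably booted with console=ttyS0, since kernel output does appear) keeps writing to serial. A sketch of the relevant lines, assuming the first serial port at 9600 baud (check /boot/grub/grub.conf for the real values once you have access):

        # /boot/grub/grub.conf - send the grub menu to the serial port
        serial --unit=0 --speed=9600
        terminal --timeout=10 serial console

    Since that file can't be edited yet, blind typing at the invisible menu may still work, because keystrokes are received even if nothing is displayed: press 'e' on the default entry, arrow down to the kernel line, press 'e' again, append " single" (or " init=/bin/bash"), press Enter, then 'b' to boot; once a shell appears, run passwd.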


  • Beginner: local server installation

    - by joanjgm
    Here's the thing: I own a small business, and until now my email has been managed by some regular hosting using cPanel. I bought a small server and installed Windows Server and Exchange. Can you tell me what I did wrong here? What I did:

    - Installed and configured my current existing domain
    - Configured all the email addresses
    - Installed No-IP, in case my public address changes
    - In the cPanel of the domain, added an MX record pointing to the No-IP host name of the server, with priority 0

    So now emails are being received by my own server. But whenever I send an email to anyone (Gmail, Hotmail, etc.) I get a response saying it cannot be delivered since it may be junk. This didn't happen when I sent emails from the hosting. What's missing? What did I do wrong? Here's the bounce:

        mx.google.com rejected your message to the following e-mail addresses:
        Joan J. Guerra Makaren ([email protected])
        mx.google.com gave this error:
        [186.88.202.13 12] Our system has detected that this message is likely unsolicited mail.
        To reduce the amount of spam sent to Gmail, this message has been blocked. Please visit
        http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for more information.
        cn9si815432vcb.71 - gsmtp

        Your message wasn't delivered due to a permission or security issue. It may have been
        rejected by a moderator, the address may only accept e-mail from certain senders, or
        another restriction may be preventing delivery.

        Diagnostic information for administrators:
        Generating server: SERVERMEGA.megaconstrucciones.com.ve
        [email protected]
        mx.google.com #550-5.7.1 [186.88.202.13 12] Our system has detected that this message is
        550-5.7.1 likely unsolicited mail. To reduce the amount of spam sent to Gmail,
        550-5.7.1 this message has been blocked. Please visit
        550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for
        550 5.7.1 more information. cn9si815432vcb.71 - gsmtp

        Original message headers:
        Received: from SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112]) by
         SERVERMEGA.megaconstrucciones.com.ve ([fe80::9096:e9c2:405b:6112%10]) with mapi;
         Thu, 29 May 2014 11:32:19 -0430
        From: prueba <[email protected]>
        To: "Joan J. Guerra Makaren" <[email protected]>
        Subject: Probando correos
        Thread-Topic: Probando correos
        Thread-Index: Ac97V1eW4OBFmoqJTRGoD7IPTC2azg==
        Date: Thu, 29 May 2014 16:04:35 +0000
        Message-ID: <[email protected]>
        Accept-Language: en-US, es-VE
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator:
        Content-Type: multipart/alternative;
         boundary="_000_000f42494487966276f7b241megaconstruccionescomve_"
        MIME-Version: 1.0
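    A sketch of the DNS side that large receivers typically check (zone-file syntax; host names and the SPF policy here are illustrative): forward DNS, reverse DNS (PTR), and SPF should all agree on the sending IP. Be aware that if 186.88.202.13 is a dynamic or residential address, Gmail will often block it regardless, and relaying outbound mail through the ISP's or the old host's smarthost may be the only workable fix.

        mail.megaconstrucciones.com.ve.  IN  A     186.88.202.13
        megaconstrucciones.com.ve.       IN  MX 0  mail.megaconstrucciones.com.ve.
        megaconstrucciones.com.ve.       IN  TXT   "v=spf1 mx ip4:186.88.202.13 -all"
        ; plus a PTR record mapping 186.88.202.13 back to mail.megaconstrucciones.com.ve,
        ; which must be set by whoever owns the IP block (usually the ISP)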


  • Cloud storage provider lost my data. How to back up next time?

    - by tomcam
    What do you do when cloud storage fails you?

    First, some background. A popular cloud storage provider (rhymes with Booger Link) damaged a bunch of my data. Getting it back was an uphill battle, with all the usual accusations that it was my fault, etc. Finally I got the data back. (Yes, I can back this up with evidence.) Idiotically, I stayed with them, so I totally get that the rest of this is on me.

    The problem had been with a shared folder that works with all 12 computers my business and family use with the service. We'll call that folder the Tragic Briefcase. It is a sort of global folder that's publicly visible to all computers on the service; it's our main repository. Today I decided to deal with some residual effects of the Crash of '11. Part of the damage was that on just one of my computers (my primary, of course) all the documents in the Tragic Briefcase were duplicated in the Windows My Documents folder. I finally started deleting them. But guess what: though they appeared to be duplicated in the file system, removing them from My Documents on the primary PC caused them to disappear from the Tragic Briefcase too. They efficiently disappeared from all the other computers' Tragic Briefcases as well. So now 21 GB of files are gone, and of course I don't know which ones.

    I want to avoid this in the future. Apart from using a different storage provider, the bigger question is this: how do I back up my cloud data? A complete backup every week or so from web to local storage would cause me to exceed my ISP's bandwidth cap. Do I need to back up each of my 12 PCs locally? I do use Backupify for my primary Google Docs, but I have been storing taxes, confidential documents, Photoshop source, video source files, and so on in the web service. So it's a lot of data, but I need to keep it safe. Backing up locally would also mean two backup drives or some kind of RAID per PC, right? Because you can't trust a single point of failure. Assuming I move to Dropbox or something of its ilk, what is the best way to make sure that if the next cloud storage provider messes up, I can restore?


  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250 ms, it becomes almost impossible to scroll, click, or type with accuracy.

    Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I get pings that range from 9 ms all the way up to 1300 ms. The problem seems to be at its worst during the hours of 1 PM to 4:30 PM. Outside of those hours, the variance in pings settles down, mostly between 9 ms and 50 ms. The signal-to-noise ratio and upstream power are both fine on my modem; the values are here: http://pastebin.com/D4hWGPXf

    I ran a traceroute from my computer to google.com (the results are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside our local network (73.98.44.1); the variance in ping times was exactly the same as when pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see, it can get pretty bad.

    Three Comcast techs have been out (two of them were here when the problem wasn't happening), and they, as well as the regional tier 2 Comcast support, were unable to diagnose the problem. I now have a ticket open with tier 3 support but have yet to hear back from them. Does anyone know what could cause these sorts of problems, or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal; are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable internet at a small start-up. We've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem. A tech came here today and did that; the same problem persists.
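    Since the pattern is time-of-day dependent, hard data may help push tier 3 along. A throwaway sketch (plain ping piped through GNU awk; the log filename is arbitrary) that records a timestamped latency sample against the first hop every 5 seconds, which makes the 1 PM-4:30 PM spike easy to graph and hard to dismiss:

        ping -i 5 73.98.44.1 | awk '{ print strftime("%F %T"), $0; fflush() }' >> firsthop.log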


  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine, and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

    - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
    - Robustness: we would like to set up a RAID array, and maybe have an external disk where we can store backups.
    - Virtualization: we need to be able to set up a few servers for development. The scenario is roughly 8 appliances running in parallel: a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that can be set up on demand to test our applications or support a customer (LAMP, Tomcat + PostgreSQL, SQL Server).
    - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

    1. Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
    2. What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5. Should we buy a hardware RAID controller, or is it OK to use software RAID (mdadm)? If hardware, which controller do you suggest? (See the sketch below for the mdadm route.)
    3. What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
    4. How much RAM? (I was thinking at least 8 GB, ~1 GB per appliance.)
    5. What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
    6. What OS do you recommend? We know Ubuntu, Debian, and Gentoo very well (we would like to use Ubuntu Server), but it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day.
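    On the software-RAID question, a sketch of what the mdadm route looks like (assuming Ubuntu Server and two dedicated data disks, /dev/sdb and /dev/sdc; device names are illustrative). A RAID 1 mirror plus mdadm's built-in failure mail covers the robustness requirement with no hardware controller to source spares for:

        # create the mirror and put a filesystem on it
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0

        # persist the array and get mailed when a disk drops out
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        echo 'MAILADDR you@example.com' >> /etc/mdadm/mdadm.conf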


  • ASP.NET MVC 1.0 dynamic images returned from controller taking 154+ seconds to display in IE8 (fine in Firefox, Safari, Chrome)

    - by julian guppy
    I have a curious problem with IE, IIS 6.0 and dynamic PNG files, and I am baffled as to how to fix it.

    Snippet from the helper (this returns the URL the view uses to request images from my controller):

        string url = LinkBuilder.BuildUrlFromExpression(helper.ViewContext.RequestContext,
            helper.RouteCollection,
            c => c.FixHeight(ir.Filename, ir.AltText, "FFFFFF"));
        url = url.Replace("&", "&amp;");
        sb.Append(string.Format("<img id=\"TheImage\" src=\"{0}\" alt=\"\" />", url)
                  + Environment.NewLine);

    This produces a piece of HTML as follows:

        <img id="TheImage"
             src="/ImgText/FixHeight?sFile=Images%2FUser%2FJulianGuppy%2FMediums%2Fconservatory.jpg&amp;backgroundColour=FFFFFF"
             alt="" />

    Snippet from the controller, ImgTextController:

        /// <summary>
        /// This function fixes the height of the image
        /// </summary>
        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult FixHeight(string sFile, string alternateText, string backgroundColour)
        {
            if (string.IsNullOrEmpty(sFile))
            {
                return new ImgTextResult();
            }

            // MVC-specific change to prepend the new directory
            if (sFile.IndexOf("Content") == -1)
            {
                sFile = "~/Content/" + sFile;
            }

            // open the file
            System.Drawing.Image img;
            try
            {
                img = System.Drawing.Image.FromFile(Server.MapPath(sFile));
            }
            catch
            {
                img = null;
            }

            // did we fail?
            if (img == null)
            {
                return new ImgTextResult();
            }

            // sort out the width and height from the image passed to me
            Int32 nWidth = img.Width;
            Int32 nHeight = img.Height;

            // What is the ideal height? Given a width of 2100 this should be 1400.
            var nIdealHeight = (int)(nWidth / 1.40920096852);

            // So is the actual height of the image already greater than the ideal height?
            Int32 nSplit;
            if (nIdealHeight < nHeight)
            {
                // yes - do nothing (well, I still need to return the image...)
                nSplit = 0;
            }
            else
            {
                // Rob wants to not show the white at the top or bottom, so if we
                // were to crop the image, how would we do it?
                // 1. Calculate what the width should be if we don't adjust the height.
                var newIdealWidth = (int)(nHeight * 1.40920096852);

                // 2. This newIdealWidth should be smaller than the existing width,
                //    so work out the split on that.
                Int32 newSplit = (nWidth - newIdealWidth) / 2;

                // 3. Now recrop the image using 0-nHeight as the height (i.e. full
                //    height), but crop the sides so that it's the correct aspect ratio.
                var newRect = new Rectangle(newSplit, 0, newIdealWidth, nHeight);
                img = CropImage(img, newRect);
                nHeight = img.Height;
                nWidth = img.Width;
                nSplit = 0;
            }

            // No? Then I want to place this image on a larger canvas, which we do
            // by creating a new image of the size that we want.
            System.Drawing.Image canvas = new Bitmap(nWidth, nIdealHeight, PixelFormat.Format24bppRgb);
            Graphics g = Graphics.FromImage(canvas);

            // whilst we can set the background colour, we default to white
            if (string.IsNullOrEmpty(backgroundColour))
            {
                backgroundColour = "FFFFFF";
            }
            Color bc = ColorTranslator.FromHtml("#" + backgroundColour);

            // fill the background (which gives us our border)
            Brush backgroundBrush = new SolidBrush(bc);
            g.FillRectangle(backgroundBrush, -1, -1, nWidth + 1, nIdealHeight + 1);

            // draw the image at the position
            var rect = new Rectangle(0, nSplit, nWidth, nHeight);
            g.DrawImage(img, rect);

            return new ImgTextResult { Image = canvas, ImageFormat = ImageFormat.Png };
        }

    My ImgTextResult is a class that returns an ActionResult by embedding the image from a memory stream into the response output stream. Snippet from ImgTextResult:

        /// <summary>
        /// Execute the result
        /// </summary>
        public override void ExecuteResult(ControllerContext context)
        {
            // output
            context.HttpContext.Response.Clear();
            context.HttpContext.Response.ContentType = "image/png";
            try
            {
                var memStream = new MemoryStream();
                Image.Save(memStream, ImageFormat.Png);
                context.HttpContext.Response.BinaryWrite(memStream.ToArray());
                context.HttpContext.Response.Flush();
                context.HttpContext.Response.Close();
                memStream.Dispose();
                Image.Dispose();
            }
            catch (Exception ex)
            {
                string a = ex.Message;
            }
        }

    Now, all of this works locally and lovely, and indeed all of it works on my production server - but only for Firefox, Safari, Chrome and other browsers. IE has a fit and decides that it either won't display the image, or displays it after approximately 154 seconds of waiting. I have made sure my HTML is XHTML compliant, and I am getting no routing errors or crashes in the event log on the server.

    Obviously I have been a muppet and have done something wrong, but what I can't fathom is why in development everything works fine, and in production all non-IE browsers work fine, yet IE 8 against the IIS 6.0 production server has some kind of problem receiving this PNG - and I don't have an error to trace. So what I am looking for is guidance as to how I can debug this problem.
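    One thing worth questioning in ExecuteResult is the Response.Flush()/Response.Close() pair: HttpResponse.Close() tears the connection down abruptly rather than completing a normal HTTP response, and IE is historically far less forgiving of that than Firefox or Chrome. A hedged sketch of the same method without it, plus an explicit Content-Length (untested against this exact setup, but a low-risk first experiment):

        public override void ExecuteResult(ControllerContext context)
        {
            var response = context.HttpContext.Response;
            response.Clear();
            response.ContentType = "image/png";

            using (var memStream = new MemoryStream())
            {
                Image.Save(memStream, ImageFormat.Png);
                byte[] bytes = memStream.ToArray();
                // tell the client exactly how much data to expect
                response.AppendHeader("Content-Length", bytes.Length.ToString());
                response.BinaryWrite(bytes);
            }
            Image.Dispose();
            // no Flush()/Close(): let ASP.NET end the response normally
        }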


  • Issue configuring JPA with Spring 3 on the JBoss 4.2.2 server

    - by KVMKReddy
    Hi, I am facing issues configuring JPA with Spring 3 on the JBoss 4.2.2 server. Please find my persistence.xml below:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                         http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                     version="1.0">
            <persistence-unit name="TestPU">
                <provider>org.hibernate.ejb.HibernatePersistence</provider>
                <jta-data-source>java:/TestDS</jta-data-source>
                <properties>
                    <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
                    <property name="hibernate.show_sql" value="true"/>
                </properties>
            </persistence-unit>
        </persistence>

    My spring-beans.xml is as below:

        <bean id="MyAdvise" class=".......Aspect">
            <property name="persister">
                <bean id="dbPersister" class="..............DataBasePersister">
                </bean>
            </property>
        </bean>

        <bean id="localContainerEntityManagerFactory"
              class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="jpaVendorAdapter">
                <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                    <property name="showSql" value="true"/>
                    <property name="database" value="ORACLE"/>
                </bean>
            </property>
            <property name="jpaProperties">
                <props>
                    <prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</prop>
                </props>
            </property>
        </bean>

        <bean id="myTxManager" class="org.springframework.orm.jpa.JpaTransactionManager">
            <property name="entityManagerFactory" ref="localContainerEntityManagerFactory"/>
        </bean>

        <tx:annotation-driven transaction-manager="myTxManager" />

    My persister bean is as follows:

        public class DataBasePersister implements IPersister {

            private static Logger log = Logger.getLogger(DataBasePersister.class);

            // The Entity Manager
            @PersistenceContext
            protected EntityManager entityManager;

            @Transactional(readOnly = false)
            public void persist(Object data) {
                log.info("IN persist() call. Is the data castable to MethodStats -->: "
                        + (data instanceof MethodStats));
                log.info("Entity Manager instance -->: " + entityManager);
                // ---------------------- (remainder elided in the original)
            }
        }

    I am getting the following exception when the Spring container creates my persister bean:

        org.springframework.transaction.CannotCreateTransactionException: Could not open JPA
        EntityManager for transaction; nested exception is java.lang.IllegalStateException:
        JTA EntityManager cannot access a transactions
            at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:382)
            at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:309)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:166)
            at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
            at $Proxy147.getTransaction(Unknown Source)
            at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:335)
            at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:105)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
            at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
            at $Proxy141.persist(Unknown Source)
            at com.adp.sbs.aop.aspectj.SBSMethodStatsCollectorAspect.doAround(SBSMethodStatsCollectorAspect.java:63)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)
            at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610)
            at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:65)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161)
            at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
            at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:621)
            at com.adp.sbs.aop.test.TestMethodLevelAnnotationStats$$EnhancerByCGLIB$$efbc78a8.MethodWithOneParamsAndReturnTypeAsString(<generated>)
            at com.adp.sbs.aop.test.SimpleTestServlet.testMethodAnnotations(SimpleTestServlet.java:46)
            at com.adp.sbs.aop.test.SimpleTestServlet.doPost(SimpleTestServlet.java:40)
            at com.adp.sbs.aop.test.SimpleTestServlet.doGet(SimpleTestServlet.java:33)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:179)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:84)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:157)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:262)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: java.lang.IllegalStateException: JTA EntityManager cannot access a transactions
            at org.hibernate.ejb.AbstractEntityManagerImpl.getTransaction(AbstractEntityManagerImpl.java:316)
            at org.springframework.orm.jpa.DefaultJpaDialect.beginTransaction(DefaultJpaDialect.java:70)
            at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:57)
            at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:332)

    Can somebody please suggest how to resolve this?
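    The exception itself points at the transaction manager choice: with a <jta-data-source> and Hibernate's JBossTransactionManagerLookup, the EntityManager is JTA-managed, and Spring's JpaTransactionManager (which drives local, resource-level JPA transactions) cannot begin a transaction on it. A sketch of the usual arrangement for this case, delegating to JBoss's JTA transaction manager instead (bean names as above; not verified on this exact stack):

        <bean id="myTxManager"
              class="org.springframework.transaction.jta.JtaTransactionManager">
            <!-- JBoss 4.x exposes the JTA TransactionManager under this JNDI name -->
            <property name="transactionManagerName" value="java:/TransactionManager"/>
        </bean>

        <tx:annotation-driven transaction-manager="myTxManager"/>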


  • Template class: constructor versus function, and the new C++ standard

    - by Oops
    Hi. In this question: http://stackoverflow.com/questions/2779155/template-point2-double-point3-double Dennis and Michael noticed the unreasonable, foolishly implemented constructor. They were right; I didn't consider this at that moment. But I found out that a constructor does not help very much for a template class like this one; instead, a function is much more convenient and safe here:

        #include <algorithm>
        #include <iostream>
        #include <sstream>
        #include <string>

        namespace point {

            template <unsigned int dims, typename T>
            struct Point {
                T X[dims];

                std::string str() {
                    std::stringstream s;
                    s << "{";
                    for (int i = 0; i < dims; ++i) {
                        s << " X" << i << ": " << X[i] << ((i < dims - 1) ? " |" : " ");
                    }
                    s << "}";
                    return s.str();
                }

                Point<dims, int> toint() {
                    Point<dims, int> ret;
                    std::copy(X, X + dims, ret.X);
                    return ret;
                }
            };

            template <typename T>
            Point<2, T> Create(T X0, T X1) {
                Point<2, T> ret;
                ret.X[0] = X0; ret.X[1] = X1;
                return ret;
            }

            template <typename T>
            Point<3, T> Create(T X0, T X1, T X2) {
                Point<3, T> ret;
                ret.X[0] = X0; ret.X[1] = X1; ret.X[2] = X2;
                return ret;
            }

            template <typename T>
            Point<4, T> Create(T X0, T X1, T X2, T X3) {
                Point<4, T> ret;
                ret.X[0] = X0; ret.X[1] = X1; ret.X[2] = X2; ret.X[3] = X3;
                return ret;
            }

        } // namespace point

        int main(void) {
            using namespace point;

            Point<2, double> p2d = point::Create(12.3, 34.5);
            Point<3, double> p3d = point::Create(12.3, 34.5, 56.7);
            Point<4, double> p4d = point::Create(12.3, 34.5, 56.7, 78.9);

            //Point<3, double> p1d = point::Create(12.3, 34.5); // no suitable user-defined conversion exists
            //Point<3, int> p1i = p4d.toint();                  // no suitable user-defined conversion exists

            Point<2, int> p2i = p2d.toint();
            Point<3, int> p3i = p3d.toint();
            Point<4, int> p4i = p4d.toint();

            std::cout << p2d.str() << std::endl;
            std::cout << p3d.str() << std::endl;
            std::cout << p4d.str() << std::endl;
            std::cout << p2i.str() << std::endl;
            std::cout << p3i.str() << std::endl;
            std::cout << p4i.str() << std::endl;

            char c;
            std::cin >> c;
        }

    Does the new C++ standard bring any improvements, language features, or simplifications regarding this aspect of constructors for a template class? And what do you think about this combination of namespace, struct, and Create function? Many thanks in advance. Oops
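    For what it's worth, a sketch of what C++11 ("C++0x" at the time of the question) allows here: a variadic constructor can replace the three Create overloads and still reject the wrong number of coordinates at compile time. This is illustrative only, not a drop-in patch (declaring this constructor removes the default constructor that toint() relies on):

        template <unsigned int dims, typename T>
        struct Point {
            T X[dims];

            // accept exactly 'dims' coordinates; anything else fails to compile,
            // just like the missing Create() overload did
            template <typename... Args>
            explicit Point(Args... args) : X{ static_cast<T>(args)... } {
                static_assert(sizeof...(Args) == dims, "wrong number of coordinates");
            }
        };

        // usage:
        // Point<2, double> p2d(12.3, 34.5);
        // Point<3, double> p3d(12.3, 34.5, 56.7);
        // Point<3, double> bad(12.3, 34.5);   // error: static_assert fires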


  • Creating a dynamic, extensible C# Expando Object

    - by Rick Strahl
    I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language. One place where I've found dynamic to be incredibly useful is in building extensible types, or types that expose traditionally non-object data (like dictionaries) in easier-to-use and more readable syntax. I wrote about a couple of these for accessing old-school ADO.NET DataRows and DataReaders more easily, for example. Those classes are dynamic wrappers that provide easier syntax and automatic type conversions, which greatly reduces code clutter and increases clarity in existing code.

    ExpandoObject in .NET 4.0

    Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again. For example, with ExpandoObject you can do stuff like this:

        dynamic expand = new ExpandoObject();
        expand.Name = "Rick";
        expand.HelloWorld = (Func<string, string>)((string name) =>
        {
            return "Hello " + name;
        });
        Console.WriteLine(expand.Name);
        Console.WriteLine(expand.HelloWorld("Dufus"));

    Internally, ExpandoObject uses a dictionary-like structure and interface to store properties and methods, and then allows you to add and access them easily. As cool as ExpandoObject is, it has a few shortcomings too:

    - It's a sealed type, so you can't use it as a base class.
    - It only works off 'properties' in the internal dictionary - you can't expose existing type data.
    - It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer.

    Expando - A truly extensible Object

    ExpandoObject is nice if you just need a dynamic container for a dictionary-like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work, because it's a sealed class that can't be inherited.

    I started thinking about this very scenario for one of the applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name, etc., but each store also has a number of extended properties available to each application. In the real-world scenario the data is loaded from the database in a data reader, and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically. In the past I've done this very thing with a separate property - Properties - just like I do for this class, but the property-plus-dictionary syntax is not ideal and is tedious to work with.

    I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary, or an ExpandoObject, to hold all those extra properties. But wouldn't it be nice if the application could actually extend an existing object that looks something like this, as you can with the Expando object:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }
        }

    and then simply start extending the properties of this object dynamically? Using the Expando object I describe later, you can now do the following:

        [TestMethod]
        public void UserExampleTest()
        {
            var user = new User();

            // set strongly typed properties
            user.Email = "[email protected]";
            user.Password = "nonya123";
            user.Name = "Rickochet";
            user.Active = true;

            // now add dynamic properties
            dynamic duser = user;
            duser.Entered = DateTime.Now;
            duser.Accesses = 1;

            // you can also add dynamic props via the indexer
            user["NickName"] = "AntiSocialX";
            duser["WebSite"] = "http://www.west-wind.com/weblog";

            // access a strong type through the dynamic ref
            Assert.AreEqual(user.Name, duser.Name);

            // access a strong type through the indexer
            Assert.AreEqual(user.Password, user["Password"]);

            // access a dynamically added value through the indexer
            Assert.AreEqual(duser.Entered, user["Entered"]);

            // access an indexer-added value through dynamic
            Assert.AreEqual(user["NickName"], duser.NickName);

            // loop through all properties, dynamic AND strongly typed (true)
            foreach (var prop in user.GetProperties(true))
            {
                object val = prop.Value;
                if (val == null)
                    val = "null";
                Console.WriteLine(prop.Key + ": " + val.ToString());
            }
        }

    As you can see, this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object.

    To access the strongly typed properties, you can use the strongly typed instance, the indexer, or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string, which can make some data scenarios much easier. To access the 'dynamically added' properties, you can use either the indexer on the strongly typed object or property syntax on the dynamic cast. Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties. Finally, you can iterate over all properties, both dynamic and strongly typed, if you choose. Lots of flexibility.

    Note also that by default the Expando object works against the (this) instance, meaning it extends the current object. You can also pass a separate instance to the constructor, in which case that object, rather than this, will be iterated over to find properties. Using this approach provides some really interesting functionality when using the dynamic type. To use it, we have to add an explicit constructor to the Expando subclass:

        public class User : Westwind.Utilities.Dynamic.Expando
        {
            public string Email { get; set; }
            public string Password { get; set; }
            public string Name { get; set; }
            public bool Active { get; set; }
            public DateTime? ExpiresOn { get; set; }

            public User() : base()
            { }

            // only required if you want to mix in a separate instance
            public User(object instance) : base(instance)
            { }
        }

    to allow the instance to be passed. When you do, you can now write:

        [TestMethod]
        public void ExpandoMixinTest()
        {
            // have Expando work on Addresses
            var user = new User(new Address());

            // cast to dynamic
            dynamic duser = user;

            // set strongly typed properties
            duser.Email = "[email protected]";
            user.Password = "nonya123";

            // set properties on the address object
            duser.Address = "32 Kaiea";
            //duser.Phone = "808-123-2131";

            // set dynamic properties
            duser.NonExistantProperty = "This works too";

            // shows the default Address.Phone value
            Console.WriteLine(duser.Phone);
        }

    Using the dynamic cast in this case allows you to access *three* different 'objects': the strong type properties, the dynamically added properties in the dictionary, and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it).

    How Expando works

    Behind the scenes, Expando is a DynamicObject subclass, as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existent property, a known method is fired that our code can intercept to provide a value. Internally, Expando uses a custom dictionary implementation to hold the dynamic properties you might add to your expandable object.

    Let's look at the code first. The code for the Expando type is straightforward and, given what it provides, relatively short. Here it is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Dynamic;
        using System.Reflection;

        namespace Westwind.Utilities.Dynamic
        {
            /// <summary>
            /// Class that provides extensible properties and methods. This
            /// dynamic object stores 'extra' properties in a dictionary or
            /// checks the actual properties of the instance.
            ///
            /// This means you can subclass this expando and retrieve either
            /// native properties or properties from values in the dictionary.
            ///
            /// This type allows you three ways to access its properties:
            ///
            /// Directly: any explicitly declared properties are accessible
            /// Dynamic: dynamic cast allows access to dictionary and native properties/methods
            /// Dictionary: any of the extended properties are accessible via the indexer
            /// </summary>
            [Serializable]
            public class Expando : DynamicObject, IDynamicMetaObjectProvider
            {
                /// <summary>
                /// Instance of object passed in
                /// </summary>
                object Instance;

                /// <summary>
                /// Cached type of the instance
                /// </summary>
                Type InstanceType;

                PropertyInfo[] InstancePropertyInfo
                {
                    get
                    {
                        if (_InstancePropertyInfo == null && Instance != null)
                            _InstancePropertyInfo = Instance.GetType().GetProperties(
                                BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly);
                        return _InstancePropertyInfo;
                    }
                }
                PropertyInfo[] _InstancePropertyInfo;

                /// <summary>
                /// String dictionary that contains the extra dynamic values
                /// stored on this object/instance
                /// </summary>
                /// <remarks>Using PropertyBag to support XML serialization of the dictionary</remarks>
                public PropertyBag Properties = new PropertyBag();
                //public Dictionary<string,object> Properties = new Dictionary<string, object>();

                /// <summary>
                /// This constructor just works off the internal dictionary and any
                /// public properties of this object. Note you can subclass Expando.
                /// </summary>
                public Expando()
                {
                    Initialize(this);
                }

                /// <summary>
                /// Allows passing in an existing instance variable to 'extend'.
                /// </summary>
                /// <remarks>
                /// You can pass in null here if you don't want to
                /// check native properties and only check the dictionary!
                /// </remarks>
                public Expando(object instance)
                {
                    Initialize(instance);
                }

                protected virtual void Initialize(object instance)
                {
                    Instance = instance;
                    if (instance != null)
                        InstanceType = instance.GetType();
                }

                /// <summary>
                /// Try to retrieve a member by name, first from instance properties,
                /// followed by the collection entries.
                /// </summary>
                public override bool TryGetMember(GetMemberBinder binder, out object result)
                {
                    result = null;

                    // first check the Properties collection for the member
                    if (Properties.Keys.Contains(binder.Name))
                    {
                        result = Properties[binder.Name];
                        return true;
                    }

                    // next check for public properties via reflection
                    if (Instance != null)
                    {
                        try
                        {
                            return GetProperty(Instance, binder.Name, out result);
                        }
                        catch { }
                    }

                    // failed to retrieve a property
                    result = null;
                    return false;
                }

                /// <summary>
                /// Property setter implementation: tries to set the value on the
                /// instance first, then on this object
                /// </summary>
                public override bool TrySetMember(SetMemberBinder binder, object value)
                {
                    // first check to see if there's a native property to set
                    if (Instance != null)
                    {
                        try
                        {
                            bool result = SetProperty(Instance, binder.Name, value);
                            if (result)
                                return true;
                        }
                        catch { }
                    }

                    // no match - set or add to dictionary
                    Properties[binder.Name] = value;
                    return true;
                }

                /// <summary>
                /// Dynamic invocation method. Currently allows only for reflection-based
                /// operation (no ability to add methods dynamically).
                /// </summary>
                public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
                {
                    if (Instance != null)
                    {
                        try
                        {
                            // check the instance passed in for methods to invoke
                            if (InvokeMethod(Instance, binder.Name, args, out result))
                                return true;
                        }
                        catch { }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection helper method to retrieve a property
                /// </summary>
                protected bool GetProperty(object instance, string name, out object result)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.Public | BindingFlags.GetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            result = ((PropertyInfo)mi).GetValue(instance, null);
                            return true;
                        }
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Reflection helper method to set a property value
                /// </summary>
                protected bool SetProperty(object instance, string name, object value)
                {
                    if (instance == null)
                        instance = this;

                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.Public | BindingFlags.SetProperty | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0];
                        if (mi.MemberType == MemberTypes.Property)
                        {
                            ((PropertyInfo)mi).SetValue(Instance, value, null);
                            return true;
                        }
                    }
                    return false;
                }

                /// <summary>
                /// Reflection helper method to invoke a method
                /// </summary>
                protected bool InvokeMethod(object instance, string name, object[] args, out object result)
                {
                    if (instance == null)
                        instance = this;

                    // look at the InstanceType
                    var miArray = InstanceType.GetMember(name,
                        BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance);
                    if (miArray != null && miArray.Length > 0)
                    {
                        var mi = miArray[0] as MethodInfo;
                        result = mi.Invoke(Instance, args);
                        return true;
                    }

                    result = null;
                    return false;
                }

                /// <summary>
                /// Convenience method that provides a string indexer
                /// to the Properties collection AND the strongly typed
                /// properties of the object by name.
                ///
                /// // dynamic
                /// exp["Address"] = "112 nowhere lane";
                /// // strong
                /// var name = exp["StronglyTypedProperty"] as string;
                /// </summary>
                /// <remarks>
                /// The getter checks the Properties dictionary first,
                /// then looks in PropertyInfo for properties.
                /// The setter checks the instance properties before
                /// checking the Properties dictionary.
                /// </remarks>
                public object this[string key]
                {
                    get
                    {
                        try
                        {
                            // try to get from the Properties collection first
                            return Properties[key];
                        }
                        catch (KeyNotFoundException)
                        {
                            // try reflection on the InstanceType
                            object result = null;
                            if (GetProperty(Instance, key, out result))
                                return result;

                            // nope, doesn't exist
                            throw;
                        }
                    }
                    set
                    {
                        if (Properties.ContainsKey(key))
                        {
                            Properties[key] = value;
                            return;
                        }

                        // check the instance for existence of the property first
                        var miArray = InstanceType.GetMember(key,
                            BindingFlags.Public | BindingFlags.GetProperty);
                        if (miArray != null && miArray.Length > 0)
                            SetProperty(Instance, key, value);
                        else
                            Properties[key] = value;
                    }
                }

                /// <summary>
                /// Returns all properties: the dictionary entries and, optionally,
                /// the strongly typed properties of the instance
                /// </summary>
                public IEnumerable<KeyValuePair<string, object>> GetProperties(bool includeInstanceProperties = false)
                {
                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                            yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null));
                    }

                    foreach (var key in this.Properties.Keys)
                        yield return new KeyValuePair<string, object>(key, this.Properties[key]);
                }

                /// <summary>
                /// Checks whether a property exists in the Properties collection
                /// or as a property on the instance
                /// </summary>
                public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false)
                {
                    bool res = Properties.ContainsKey(item.Key);
                    if (res)
                        return true;

                    if (includeInstanceProperties && Instance != null)
                    {
                        foreach (var prop in this.InstancePropertyInfo)
                        {
                            if (prop.Name == item.Key)
                                return true;
                        }
                    }

                    return false;
                }
            }
        }

    Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer and the Contains() and GetProperties() methods, which work against the Properties dictionary AND the internal instance. The reason for not implementing IDictionary is that (a) it doesn't add much value, since you can access the Properties dictionary directly, and (b) I wanted to keep the interface to the class very lean so that it can serve as an entity type if desired. Implementing IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type.

    IDictionary and IEnumerable are also problematic for XML and JSON serialization: the XML serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and to leave iteration functions to the publicly exposed Properties dictionary.

    Note that the dictionary used here is a custom PropertyBag class I created to allow serialization to work. One important aspect for my apps is that whatever custom properties get added, they have to be accessible to AJAX clients, since the particular app I'm working on is a single page web app where most of the web access is through JSON AJAX calls. PropertyBag can serialize to XML and can one-way serialize to JSON using the JavaScript serializer (not the DCS serializers though).

    The key components that make Expando work in this code are the Properties dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can explicitly access the collection to get better performance, or to manipulate the members in internal code (like loading up dynamic values from a database). Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. This means that user["Name"] works against native properties of the object, as does user["Name"] = "RogaDugDog".

    What's your Use Case?

    This is still an early prototype, but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and to expose those values in a nice, easy-to-use manner. I'm also finding that using this type of object for ViewModels works very well for adding custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra-dictionary approach to extensibility for years; using a dynamic type to make the syntax cleaner is just a bonus here. What can you think of to use this for?

    Resources

    - Source Code and Tests (GitHub)
    - Also integrated in Westwind.Utilities of the West Wind Web Toolkit
    - West Wind Utilities NuGet

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET, Dynamic Types.

    Read the article

  • Where is my app.config for SSIS?

Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building a SSIS package, not an application. First of all let's review a couple of examples where you may need to do this. You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block. You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings. You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings. You need to change behaviours of the .NET framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest setting for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection manager behaves (a sample fragment is shown at the end of this post). You are consuming a WCF service and wish to specify the endpoint in configuration. There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes.

There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package. The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most live in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too. To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files: C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below.

There are a total of five specific configuration files that you may need to change; each one is detailed below.

DTExec.exe.config
DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.

DtsDebugHost.exe.config
DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.
This may surprise some people as Visual Studio is only 32-bit, but thankfully the debugger supports both; this can be set in the project properties, see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

dtshost.exe.config
dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.

devenv.exe.config
Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor.
Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config
Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config
Visual Studio is only available as 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

DTExecUI.exe.config
The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below.
C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe
C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe
A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.

In summary we have covered the assembly configuration files for all of the standard methods of building and running a SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program’s app.config as well.
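To make that concrete, here is a minimal sketch of the kind of fragment you might add to one of these files - DtsDebugHost.exe.config, say - to enable the useUnsafeHeaderParsing behaviour mentioned above. Merge it with whatever sections the file already contains rather than replacing the whole file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- let HttpWebRequest (and therefore the HTTP Connection Manager) accept non-compliant headers -->
  <system.net>
    <settings>
      <httpWebRequest useUnsafeHeaderParsing="true" />
    </settings>
  </system.net>
</configuration>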

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, finding the proper interfaces to make that happen is not easy, and they are for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:

<soapenv:Header>
  <wsse:Security soapenv:mustUnderstand="1"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <wsse:UsernameToken wsu:Id="UsernameToken-8"
      xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <wsse:Username>TeStUsErNaMe1</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
      <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
      <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
    </wsse:UsernameToken>
  </wsse:Security>
</soapenv:Header>

Specifically, the Nonce and Created keys are what WCF doesn't create or have a built-in formatting for.

Why is there a nonce?

My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values.

Basic svcutil Import - not much Luck

The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds WS-Security to the basicHttpBinding in the config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="RealTimeOnlineSoapBinding">
          <security mode="Transport" />
        </binding>
        <binding name="RealTimeOnlineSoapBinding1" />
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
                binding="basicHttpBinding"
                bindingConfiguration="RealTimeOnlineSoapBinding"
                contract="RealTimeOnline.RealTimeOnline"
                name="RealTimeOnline" />
    </client>
  </system.serviceModel>
</configuration>

If I run this as is, using code like this:

var client = new RealTimeOnlineClient();
client.ClientCredentials.UserName.UserName = "TheUsername";
client.ClientCredentials.UserName.Password = "ThePassword";

… I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport level security to be applied, rather than message level security.
To fix this so that a WS-Security message header is sent, the security mode can be changed to:

<security mode="TransportWithMessageCredential" />

Now if I re-run I at least get a WS-Security header, which looks like this:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <o:Security s:mustUnderstand="1"
      xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <u:Timestamp u:Id="_0">
        <u:Created>2012-11-24T02:55:18.011Z</u:Created>
        <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
      </u:Timestamp>
      <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
        <o:Username>TheUserName</o:Username>
        <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>

Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or Created timestamp as required by my original service.

Using a CustomBinding instead

My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

<?xml version="1.0"?>
<configuration>
  <system.serviceModel>
    <bindings>
      <customBinding>
        <binding name="CustomSoapBinding">
          <security includeTimestamp="false"
                    authenticationMode="UserNameOverTransport"
                    defaultAlgorithmSuite="Basic256"
                    requireDerivedKeys="false"
                    messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
          </security>
          <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
          <httpsTransport maxReceivedMessageSize="2000000000"/>
        </binding>
      </customBinding>
    </bindings>
    <client>
      <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                binding="customBinding"
                bindingConfiguration="CustomSoapBinding"
                contract="RealTimeOnline.RealTimeOnline"
                name="RealTimeOnline" />
    </client>
  </system.serviceModel>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>

This ends up creating a cleaner header that omits the timestamp field, which can cause problems for some services. The WS-Security header output generated with the above looks like this:

<s:Header>
  <o:Security s:mustUnderstand="1"
    xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
      <o:Username>TheUsername</o:Username>
      <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
    </o:UsernameToken>
  </o:Security>
</s:Header>

This is closer, as it includes only the username and password. The key here is the protocol for WS-Security:

messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

which explicitly specifies the protocol version. There are several variants of this specification, but none of them seem to support the nonce, unfortunately.
This protocol does allow for optional omission of the Nonce and Created timestamp (which effectively makes those keys optional). With some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for accessing some services. Unfortunately for my target service that was not an option. The nonce has to be there.

Creating Custom ClientCredentials

As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

The latter forum message is the more useful of the two (the last message on the thread in particular), and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively.

In order for this to work, a number of classes have to be overridden: ClientCredentials, ClientCredentialsSecurityTokenManager and WSSecurityTokenSerializer.

The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The token manager and token serializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:

public class CustomCredentials : ClientCredentials
{
    public CustomCredentials()
    { }

    protected CustomCredentials(CustomCredentials cc)
        : base(cc)
    { }

    public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
    {
        return new CustomSecurityTokenManager(this);
    }

    protected override ClientCredentials CloneCore()
    {
        return new CustomCredentials(this);
    }
}

public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    public CustomSecurityTokenManager(CustomCredentials cred)
        : base(cred)
    { }

    public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
    {
        return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
    }
}

// requires: using System; using System.IdentityModel.Tokens;
//           using System.Security.Cryptography; using System.ServiceModel.Security; using System.Text;
public class CustomTokenSerializer : WSSecurityTokenSerializer
{
    public CustomTokenSerializer(SecurityVersion sv)
        : base(sv)
    { }

    protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
    {
        UserNameSecurityToken userToken = token as UserNameSecurityToken;

        string tokennamespace = "o";

        // use UTC and a 24-hour (HH) format - the trailing Z denotes UTC time
        DateTime created = DateTime.UtcNow;
        string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

        // unique Nonce value - encode with SHA-1 for 'randomness'
        // in theory the nonce could just be the GUID by itself
        string phrase = Guid.NewGuid().ToString();
        var nonce = GetSHA1String(phrase);

        // in this case password is plain text
        // for digest mode password needs to be encoded as:
        // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
        // and profile needs to change to
        //string password = GetSHA1String(nonce + createdStr + userToken.Password);
        string password = userToken.Password;

        writer.WriteRaw(string.Format(
            "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
            "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
            "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" +
            "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
            "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
            tokennamespace));
    }

    protected string GetSHA1String(string phrase)
    {
        SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
        byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
        return Convert.ToBase64String(hashedDataBytes);
    }
}

Realistically only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO.
To satisfy even that aspect though, I added the SHA-1 hashing and Base64 encoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between, but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:

<s:Header>
  <o:Security s:mustUnderstand="1"
    xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1"
      xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <o:Username>TheUsername</o:Username>
      <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
      <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
      <u:Created>2012-11-23T07:10:04.670Z</u:Created>
    </o:UsernameToken>
  </o:Security>
</s:Header>

which is exactly as it should be.

Password Digest?

In my case the password is passed in plain text over an SSL connection, so there was no digest required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password, things are a little bit trickier. The password Type namespace needs to change to:

http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

and then the password value needs to be encoded. The format for password digest encoding is this:

Base64(SHA-1(Nonce + Created + Password))

and it can be handled in the code above with this line (which is commented in the snippet above):

string password = GetSHA1String(nonce + createdStr + userToken.Password);

The entire WriteTokenCore method for digest code looks like this:

protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    string tokennamespace = "o";

    // use UTC and a 24-hour (HH) format - the trailing Z denotes UTC time
    DateTime created = DateTime.UtcNow;
    string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

    // unique Nonce value - encode with SHA-1 for 'randomness'
    // in theory the nonce could just be the GUID by itself
    string phrase = Guid.NewGuid().ToString();
    var nonce = GetSHA1String(phrase);

    string password = GetSHA1String(nonce + createdStr + userToken.Password);

    writer.WriteRaw(string.Format(
        "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
        "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
        "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" +
        "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" +
        "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
        tokennamespace));
}

I had no service to connect to in order to try out digest auth - if you end up needing it and get it to work, please drop a comment…

How to use the custom Credentials

The easiest way to use the custom credentials is to create the client in code.
Here's a factory method I use to create an instance of my service client:

public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password)
{
    if (string.IsNullOrEmpty(url))
        url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

    CustomBinding binding = new CustomBinding();

    var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
    security.IncludeTimestamp = false;
    security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
    security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    var encoding = new TextMessageEncodingBindingElement();
    encoding.MessageVersion = MessageVersion.Soap11;

    var transport = new HttpsTransportBindingElement();
    transport.MaxReceivedMessageSize = 20000000; // 20 megs

    binding.Elements.Add(security);
    binding.Elements.Add(encoding);
    binding.Elements.Add(transport);

    RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url));

    // to use full client credential with Nonce uncomment this code:
    // it looks like this might not be required - the service seems to work without it
    client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
    client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;

    return client;
}

This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read only, and this is the way it has to be added.

Summary

It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design, as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. There are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

Again I marvel at WCF and the breadth of support for protocol features it packs into a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least being very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security, even when using some sort of federation, is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in WCF  Web Services
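For completeness, here's a rough sketch of what calling code for the factory shown earlier might look like. The URL is a placeholder, and GetQuote stands in for whatever operations the generated RealTimeOnlineClient actually exposes:

var client = CreateRealTimeOnlineProxy(
    "https://notrealurl.com:443/cows/services/RealTimeOnline",   // placeholder URL
    "TheUsername",
    "ThePassword");
try
{
    // any call made through the client now goes out with the custom WS-Security header
    var quote = client.GetQuote("MSFT");   // hypothetical service operation
    Console.WriteLine(quote);
    client.Close();
}
catch
{
    client.Abort();   // never Close() a faulted channel
    throw;
}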

    Read the article

  • ASP.NET Podcast Show #148 - ASP.NET WebForms to build a Mobile Web Application

    - by Wallym
    Check the podcast site for the original url. This is the video and source code for an ASP.NET WebForms app that I wrote that is optimized for the iPhone and mobile environments.  Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download M4V for iPhone/iPad. Download MP3. Link to iWebKit. Source Code: <%@ Page Title="MapSplore" Language="C#" MasterPageFile="iPhoneMaster.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="AT_iPhone_Default" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="Content" Runat="Server" ClientIDMode="Static">    <asp:ScriptManager ID="sm" runat="server"         EnablePartialRendering="true" EnableHistory="false" EnableCdn="true" />    <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>    <script  language="javascript"  type="text/javascript">    <!--    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandle);    function endRequestHandle(sender, Args) {        setupMapDiv();        setupPlaceIveBeen();    }    function setupPlaceIveBeen() {        var mapPlaceIveBeen = document.getElementById('divPlaceIveBeen');        if (mapPlaceIveBeen != null) {            var PlaceLat = document.getElementById('<%=hdPlaceIveBeenLatitude.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceIveBeenLongitude.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=lblPlaceIveBeenName.ClientID %>').innerHTML;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapPlaceIveBeen, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }    }    function setupMapDiv() {        var mapdiv = document.getElementById('divImHere');        if (mapdiv != null) {            var PlaceLat = document.getElementById('<%=hdPlaceLat.ClientID %>').value;            var PlaceLon = document.getElementById('<%=hdPlaceLon.ClientID %>').value;            var PlaceTitle = document.getElementById('<%=hdPlaceTitle.ClientID %>').value;            var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);            var myOptions = {                zoom: 14,                center: latlng,                mapTypeId: google.maps.MapTypeId.ROADMAP            };            var map = new google.maps.Map(mapdiv, myOptions);            var marker = new google.maps.Marker({                position: new google.maps.LatLng(PlaceLat, PlaceLon),                map: map,                title: PlaceTitle,                clickable: false            });        }     }    -->    </script>    <asp:HiddenField ID="Latitude" runat="server" />    <asp:HiddenField ID="Longitude" runat="server" />    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js%22%3E%3C/script>    <script language="javascript" type="text/javascript">        $(document).ready(function () {            GetLocation();            setupMapDiv();            setupPlaceIveBeen();        });        function GetLocation() {            if (navigator.geolocation 
!= null) {                navigator.geolocation.getCurrentPosition(getData);            }            else {                var mess = document.getElementById('<%=Message.ClientID %>');                mess.innerHTML = "Sorry, your browser does not support geolocation. " +                    "Try the latest version of Safari on the iPhone, Android browser, or the latest version of FireFox.";            }        }        function UpdateLocation_Click() {            GetLocation();        }        function getData(position) {            var latitude = position.coords.latitude;            var longitude = position.coords.longitude;            var hdLat = document.getElementById('<%=Latitude.ClientID %>');            var hdLon = document.getElementById('<%=Longitude.ClientID %>');            hdLat.value = latitude;            hdLon.value = longitude;        }    </script>    <asp:Label ID="Message" runat="server" />    <asp:UpdatePanel ID="upl" runat="server">        <ContentTemplate>    <asp:Panel ID="pnlStart" runat="server" Visible="true">    <div id="topbar">        <div id="title">MapSplore</div>    </div>    <div id="content">        <ul class="pageitem">            <li class="menu">                <asp:LinkButton ID="lbLocalDeals" runat="server" onclick="lbLocalDeals_Click">                <asp:Image ID="imLocalDeals" runat="server" ImageUrl="~/Images/ArtFavor_Money_Bag_Icon.png" Height="30" />                <span class="name">Local Deals.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbLocalPlaces" runat="server" onclick="lbLocalPlaces_Click">                <asp:Image ID="imLocalPlaces" runat="server" ImageUrl="~/Images/Andy_Houses_on_the_horizon_-_Starburst_remix.png" Height="30" />                <span class="name">Local Places.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbWhereIveBeen" runat="server" onclick="lbWhereIveBeen_Click">                <asp:Image ID="imImHere" runat="server" ImageUrl="~/Images/ryanlerch_flagpole.png" Height="30" />                <span class="name">I've been here.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbMyStats" runat="server">                <asp:Image ID="imMyStats" runat="server" ImageUrl="~/Images/Anonymous_Spreadsheet.png" Height="30" />                <span class="name">My Stats.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="menu">                <asp:LinkButton ID="lbAddAPlace" runat="server" onclick="lbAddAPlace_Click">                <asp:Image ID="imAddAPlace" runat="server" ImageUrl="~/Images/jean_victor_balin_add.png" Height="30" />                <span class="name">Add a Place.</span>                <span class="arrow"></span>                </asp:LinkButton>                </li>            <li class="button">                <input type="button" value="Update Your Current Location" onclick="UpdateLocation_Click()">                </li>        </ul>    </div>    </asp:Panel>    <div>    <asp:Panel ID="pnlCoupons" runat="server" Visible="false">        <div id="topbar">        <div id="title">MapSplore</div>        <div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"        
         ID="ReturnFromDeals" OnClick="ReturnFromDeals_Click" /></div></div>    <div class="content">    <asp:ListView ID="lvCoupons" runat="server">        <LayoutTemplate>            <ul class="pageitem" runat="server">                <asp:PlaceHolder ID="itemPlaceholder" runat="server" />            </ul>        </LayoutTemplate>        <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Place.Name") %>' OnClick="lbBusiness_Click">                    <span class="comment">                    <asp:Label ID="lblAddress" runat="server" Text='<%#Eval("Place.Address1") %>' />                    <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Place.Distance"))) + " meters" %>' CssClass="smallText" />                    <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                    <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                    </span>                    <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView><asp:GridView ID="gvCoupons" runat="server" AutoGenerateColumns="false">            <HeaderStyle BackColor="Silver" />            <AlternatingRowStyle BackColor="Wheat" />            <Columns>                <asp:TemplateField AccessibleHeaderText="Business" HeaderText="Business">                    <ItemTemplate>                        <asp:Image ID="imPlaceType" runat="server" Text='<%#Eval("Type") %>' ImageUrl='<%#Eval("Image") %>' />                        <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Name") %>' OnClick="lbBusiness_Click" />                        <asp:LinkButton ID="lblAddress" runat="server" Text='<%#Eval("Address1") %>' CssClass="smallText" />                        <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>' CssClass="smallText" />                        <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                        <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />                        <asp:Label ID="lblInfo" runat="server" Visible="false" />                    </ItemTemplate>                </asp:TemplateField>            </Columns>        </asp:GridView>    </div>    </asp:Panel>    <asp:Panel ID="pnlPlaces" runat="server" Visible="false">    <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Return"                 ID="ReturnFromPlaces" OnClick="ReturnFromPlaces_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvPlaces" runat="server">            <LayoutTemplate>                <ul id="ulPlaces" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                    <li class="menu">                        <asp:LinkButton ID="lbNotListed" runat="server" CssClass="name"                            OnClick="lbNotListed_Click">                            Place not listed                            <span class="arrow"></span>                            </asp:LinkButton>                    </li>                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu">                <asp:LinkButton ID="lbImHere" runat="server" CssClass="name"                  
   OnClick="lbImHere_Click">                <%#DisplayName(Eval("Name")) %>&nbsp;                <%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>                <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />                <span class="arrow"></span>                </asp:LinkButton></li></ItemTemplate></asp:ListView>    </div>    </asp:Panel>    <asp:Panel ID="pnlImHereNow" runat="server" Visible="false">        <div id="topbar">        <div id="title">            MapSplore</div><div id="leftbutton">            <asp:LinkButton runat="server" Text="Places"                 ID="lbImHereNowReturn" OnClick="lbImHereNowReturn_Click" /></div></div>            <div id="rightbutton">            <asp:LinkButton runat="server" Text="Beginning"                ID="lbBackToBeginning" OnClick="lbBackToBeginning_Click" />            </div>        <div id="content">        <ul class="pageitem">        <asp:HiddenField ID="hdPlaceId" runat="server" />        <asp:HiddenField ID="hdPlaceLat" runat="server" />        <asp:HiddenField ID="hdPlaceLon" runat="server" />        <asp:HiddenField ID="hdPlaceTitle" runat="server" />        <asp:Button ID="btnImHereNow" runat="server"             Text="I'm here" OnClick="btnImHereNow_Click" />             <asp:Label ID="lblPlaceTitle" runat="server" /><br />        <asp:TextBox ID="txtWhatsHappening" runat="server" TextMode="MultiLine" Rows="2" style="width:300px" /><br />        <div id="divImHere" style="width:300px; height:300px"></div>        </div>        </ul>    </asp:Panel>    <asp:Panel runat="server" ID="pnlIveBeenHere" Visible="false">        <div id="topbar">        <div id="title">            Where I've been</div><div id="leftbutton">            <asp:LinkButton ID="lbIveBeenHereBack" runat="server" Text="Back" OnClick="lbIveBeenHereBack_Click" /></div></div>        <div id="content">        <asp:ListView ID="lvWhereIveBeen" runat="server">            <LayoutTemplate>                <ul id="ulWhereIveBeen" class="pageitem" runat="server">                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />                </ul>            </LayoutTemplate>            <ItemTemplate>            <li class="menu" runat="server">                <asp:LinkButton ID="lbPlaceIveBeen" runat="server" OnClick="lbPlaceIveBeen_Click" CssClass="name">                    <asp:Label ID="lblPlace" runat="server" Text='<%#Eval("PlaceName") %>' /> at                    <asp:Label ID="lblTime" runat="server" Text='<%#Eval("ATTime") %>' CssClass="content" />                    <asp:HiddenField ID="hdATID" runat="server" Value='<%#Eval("ATID") %>' />                    <span class="arrow"></span>                </asp:LinkButton>            </li>            </ItemTemplate>        </asp:ListView>        </div>        </asp:Panel>    <asp:Panel runat="server" ID="pnlPlaceIveBeen" Visible="false">        <div id="topbar">        <div id="title">            I've been here        </div>        <div id="leftbutton">            <asp:LinkButton ID="lbPlaceIveBeenBack" runat="server" Text="Back" OnClick="lbPlaceIveBeenBack_Click" />        </div>        <div id="rightbutton">            <asp:LinkButton ID="lbPlaceIveBeenBeginning" runat="server" Text="Beginning" OnClick="lbPlaceIveBeenBeginning_Click" />        </div>        </div>        <div id="content">            <ul class="pageitem">            <li>            <asp:HiddenField ID="hdPlaceIveBeenPlaceId" runat="server" />            <asp:HiddenField 
ID="hdPlaceIveBeenLatitude" runat="server" />            <asp:HiddenField ID="hdPlaceIveBeenLongitude" runat="server" />            <asp:Label ID="lblPlaceIveBeenName" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenAddress" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCity" runat="server" />,             <asp:Label ID="lblPlaceIveBeenState" runat="server" />            <asp:Label ID="lblPlaceIveBeenZipCode" runat="server" /><br />            <asp:Label ID="lblPlaceIveBeenCountry" runat="server" /><br />            <div id="divPlaceIveBeen" style="width:300px; height:300px"></div>            </li>            </ul>        </div>                </asp:Panel>         <asp:Panel ID="pnlAddPlace" runat="server" Visible="false">                <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton ID="lbAddPlaceReturn" runat="server" Text="Back" OnClick="lbAddPlaceReturn_Click" /></div><div id="rightnav"></div></div><div id="content">    <ul class="pageitem">        <li id="liPlaceAddMessage" runat="server" visible="false">        <asp:Label ID="PlaceAddMessage" runat="server" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtPlaceName" runat="server" placeholder="Name of Establishment" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtAddress1" runat="server" placeholder="Address 1" />        </li>        <li class="bigfield">        <asp:TextBox ID="txtCity" runat="server" placeholder="City" />        </li>        <li class="select">        <asp:DropDownList ID="ddlProvince" runat="server" placeholder="Select State" />          <span class="arrow"></span>              </li>        <li class="bigfield">        <asp:TextBox ID="txtZipCode" runat="server" placeholder="Zip Code" />        </li>        <li class="select">        <asp:DropDownList ID="ddlCountry" runat="server"             onselectedindexchanged="ddlCountry_SelectedIndexChanged" />        <span class="arrow"></span>        </li>        <li class="bigfield">        <asp:TextBox ID="txtPhoneNumber" runat="server" placeholder="Phone Number" />        </li>        <li class="checkbox">            <span class="name">You Here Now:</span> <asp:CheckBox ID="cbYouHereNow" runat="server" Checked="true" />        </li>        <li class="button">        <asp:Button ID="btnAdd" runat="server" Text="Add Place"             onclick="btnAdd_Click" />        </li>    </ul></div>        </asp:Panel>        <asp:Panel ID="pnlImHere" runat="server" Visible="false">            <asp:TextBox ID="txtImHere" runat="server"                 TextMode="MultiLine" Rows="3" Columns="40" /><br />            <asp:DropDownList ID="ddlPlace" runat="server" /><br />            <asp:Button ID="btnHere" runat="server" Text="Tell Everyone I'm Here"                 onclick="btnHere_Click" /><br />        </asp:Panel>     </div>    </ContentTemplate>    </asp:UpdatePanel> </asp:Content> Code Behind .cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.HtmlControls;using System.Web.UI.WebControls;using LocationDataModel; public partial class AT_iPhone_Default : ViewStatePage{    private iPhoneDevice ipd;     protected void Page_Load(object sender, EventArgs e)    {        LocationDataEntities lde = new LocationDataEntities();        if (!Page.IsPostBack)        {            var Countries = from c in lde.Countries select c;            foreach (Country co in 
Countries)            {                ddlCountry.Items.Add(new ListItem(co.Name, co.CountryId.ToString()));            }            ddlCountry_SelectedIndexChanged(ddlCountry, null);            if (AppleIPhone.IsIPad())                ipd = iPhoneDevice.iPad;            if (AppleIPhone.IsIPhone())                ipd = iPhoneDevice.iPhone;            if (AppleIPhone.IsIPodTouch())                ipd = iPhoneDevice.iPodTouch;        }    }    protected void btnPlaces_Click(object sender, EventArgs e)    {    }    protected void btnAdd_Click(object sender, EventArgs e)    {        bool blImHere = cbYouHereNow.Checked;        string Place = txtPlaceName.Text,            Address1 = txtAddress1.Text,            City = txtCity.Text,            ZipCode = txtZipCode.Text,            PhoneNumber = txtPhoneNumber.Text,            ProvinceId = ddlProvince.SelectedItem.Value,            CountryId = ddlCountry.SelectedItem.Value;        int iProvinceId, iCountryId;        double dLatitude, dLongitude;        DataAccess da = new DataAccess();        if ((!String.IsNullOrEmpty(ProvinceId)) &&            (!String.IsNullOrEmpty(CountryId)))        {            iProvinceId = Convert.ToInt32(ProvinceId);            iCountryId = Convert.ToInt32(CountryId);            if (blImHere)            {                dLatitude = Convert.ToDouble(Latitude.Value);                dLongitude = Convert.ToDouble(Longitude.Value);                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber,                    dLatitude, dLongitude);            }            else            {                da.StorePlace(Place, Address1, String.Empty, City,                    iProvinceId, ZipCode, iCountryId, PhoneNumber);            }            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Awesome, your place has been added. 
Add Another!";            txtPlaceName.Text = String.Empty;            txtAddress1.Text = String.Empty;            txtCity.Text = String.Empty;            ddlProvince.SelectedIndex = -1;            txtZipCode.Text = String.Empty;            txtPhoneNumber.Text = String.Empty;        }        else        {            liPlaceAddMessage.Visible = true;            PlaceAddMessage.Text = "Please select a State and a Country.";        }    }    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)    {        string CountryId = ddlCountry.SelectedItem.Value;        if (!String.IsNullOrEmpty(CountryId))        {            int iCountryId = Convert.ToInt32(CountryId);            LocationDataModel.LocationDataEntities lde = new LocationDataModel.LocationDataEntities();            var prov = from p in lde.Provinces where p.CountryId == iCountryId                        orderby p.ProvinceName select p;                        ddlProvince.Items.Add(String.Empty);            foreach (Province pr in prov)            {                ddlProvince.Items.Add(new ListItem(pr.ProvinceName, pr.ProvinceId.ToString()));            }        }        else        {            ddlProvince.Items.Clear();        }    }    protected void btnImHere_Click(object sender, EventArgs e)    {        int i = 0;        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value),            Lon = Convert.ToDouble(Longitude.Value);        List<Place> lp = da.NearByLocations(Lat, Lon);        foreach (Place p in lp)        {            ListItem li = new ListItem(p.Name, p.PlaceId.ToString());            if (i == 0)            {                li.Selected = true;            }            ddlPlace.Items.Add(li);            i++;        }        pnlAddPlace.Visible = false;        pnlImHere.Visible = true;    }    protected void lbImHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        ListViewItem lvi = (ListViewItem)(((LinkButton)sender).Parent);        HiddenField hd = (HiddenField)lvi.FindControl("hdPlaceId");        long PlaceId = Convert.ToInt64(hd.Value);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        Place pl = da.GetPlace(PlaceId);        pnlImHereNow.Visible = true;        pnlPlaces.Visible = false;        hdPlaceId.Value = PlaceId.ToString();        hdPlaceLat.Value = pl.Latitude.ToString();        hdPlaceLon.Value = pl.Longitude.ToString();        hdPlaceTitle.Value = pl.Name;        lblPlaceTitle.Text = pl.Name;    }    protected void btnHere_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        string WhatsH = txtImHere.Text;        long PlaceId = Convert.ToInt64(ddlPlace.SelectedValue);        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsH,            dLatitude, dLongitude);    }    protected void btnLocalCoupons_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();     }    protected void lbBusiness_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        GridViewRow gvr = 
(GridViewRow)(((LinkButton)sender).Parent.Parent);        HiddenField hd = (HiddenField)gvr.FindControl("hdPlaceId");        string sPlaceId = hd.Value;        Int64 PlaceId;        if (!String.IsNullOrEmpty(sPlaceId))        {            PlaceId = Convert.ToInt64(sPlaceId);        }    }    protected void lbLocalDeals_Click(object sender, EventArgs e)    {        double dLatitude = Convert.ToDouble(Latitude.Value);        double dLongitude = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        pnlCoupons.Visible = true;        pnlStart.Visible = false;        List<GeoPromotion> lgp = da.NearByDeals(dLatitude, dLongitude);        lvCoupons.DataSource = lgp;        lvCoupons.DataBind();    }    protected void lbLocalPlaces_Click(object sender, EventArgs e)    {        DataAccess da = new DataAccess();        double Lat = Convert.ToDouble(Latitude.Value);        double Lon = Convert.ToDouble(Longitude.Value);        List<LocationDataModel.Place> places = da.NearByLocations(Lat, Lon);        lvPlaces.DataSource = places;        lvPlaces.SelectedIndex = -1;        lvPlaces.DataBind();        pnlPlaces.Visible = true;        pnlStart.Visible = false;    }    protected void ReturnFromPlaces_Click(object sender, EventArgs e)    {        pnlPlaces.Visible = false;        pnlStart.Visible = true;    }    protected void ReturnFromDeals_Click(object sender, EventArgs e)    {        pnlCoupons.Visible = false;        pnlStart.Visible = true;    }    protected void btnImHereNow_Click(object sender, EventArgs e)    {        long PlaceId = Convert.ToInt32(hdPlaceId.Value);        string UserName = Membership.GetUser().UserName;        string WhatsHappening = txtWhatsHappening.Text;        double UserLat = Convert.ToDouble(Latitude.Value);        double UserLon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        da.StoreUserAT(UserName, PlaceId, WhatsHappening,             UserLat, UserLon);    }    protected void lbImHereNowReturn_Click(object sender, EventArgs e)    {        pnlImHereNow.Visible = false;        pnlPlaces.Visible = true;    }    protected void lbBackToBeginning_Click(object sender, EventArgs e)    {        pnlStart.Visible = true;        pnlImHereNow.Visible = false;    }    protected void lbWhereIveBeen_Click(object sender, EventArgs e)    {        string UserName = Membership.GetUser().UserName;        pnlStart.Visible = false;        pnlIveBeenHere.Visible = true;        DataAccess da = new DataAccess();        lvWhereIveBeen.DataSource = da.UserATs(UserName, 0, 15);        lvWhereIveBeen.DataBind();    }    protected void lbIveBeenHereBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = false;        pnlStart.Visible = true;    }     protected void lbPlaceIveBeen_Click(object sender, EventArgs e)    {        LinkButton lb = (LinkButton)sender;        ListViewItem lvi = (ListViewItem)lb.Parent.Parent;        HiddenField hdATID = (HiddenField)lvi.FindControl("hdATID");        Int64 ATID = Convert.ToInt64(hdATID.Value);        DataAccess da = new DataAccess();        pnlIveBeenHere.Visible = false;        pnlPlaceIveBeen.Visible = true;        var plac = da.GetPlaceViaATID(ATID);        hdPlaceIveBeenPlaceId.Value = plac.PlaceId.ToString();        hdPlaceIveBeenLatitude.Value = plac.Latitude.ToString();        hdPlaceIveBeenLongitude.Value = plac.Longitude.ToString();        lblPlaceIveBeenName.Text = plac.Name;        lblPlaceIveBeenAddress.Text = plac.Address1;        lblPlaceIveBeenCity.Text = 
plac.City;        lblPlaceIveBeenState.Text = plac.Province.ProvinceName;        lblPlaceIveBeenZipCode.Text = plac.ZipCode;        lblPlaceIveBeenCountry.Text = plac.Country.Name;    }     protected void lbNotListed_Click(object sender, EventArgs e)    {        SetupAddPoint();        pnlPlaces.Visible = false;    }     protected void lbAddAPlace_Click(object sender, EventArgs e)    {        SetupAddPoint();    }     private void SetupAddPoint()    {        double lat = Convert.ToDouble(Latitude.Value);        double lon = Convert.ToDouble(Longitude.Value);        DataAccess da = new DataAccess();        var zip = da.WhereAmIAt(lat, lon);        if (zip.Count > 0)        {            var z0 = zip[0];            txtCity.Text = z0.City;            txtZipCode.Text = z0.ZipCode;            ddlProvince.ClearSelection();            if (z0.ProvinceId.HasValue == true)            {                foreach (ListItem li in ddlProvince.Items)                {                    if (li.Value == z0.ProvinceId.Value.ToString())                    {                        li.Selected = true;                        break;                    }                }            }        }        pnlAddPlace.Visible = true;        pnlStart.Visible = false;    }    protected void lbAddPlaceReturn_Click(object sender, EventArgs e)    {        pnlAddPlace.Visible = false;        pnlStart.Visible = true;        liPlaceAddMessage.Visible = false;        PlaceAddMessage.Text = String.Empty;    }    protected void lbPlaceIveBeenBack_Click(object sender, EventArgs e)    {        pnlIveBeenHere.Visible = true;        pnlPlaceIveBeen.Visible = false;            }    protected void lbPlaceIveBeenBeginning_Click(object sender, EventArgs e)    {        pnlPlaceIveBeen.Visible = false;        pnlStart.Visible = true;    }    protected string DisplayName(object val)    {        string strVal = Convert.ToString(val);         if (AppleIPhone.IsIPad())        {            ipd = iPhoneDevice.iPad;        }        if (AppleIPhone.IsIPhone())        {            ipd = iPhoneDevice.iPhone;        }        if (AppleIPhone.IsIPodTouch())        {            ipd = iPhoneDevice.iPodTouch;        }        return (iPhoneHelper.DisplayContentOnMenu(strVal, ipd));    }} iPhoneHelper.cs file: using System;using System.Collections.Generic;using System.Linq;using System.Web; public enum iPhoneDevice{    iPhone, iPodTouch, iPad}/// <summary>/// Summary description for iPhoneHelper/// </summary>/// public class iPhoneHelper{ public iPhoneHelper() {  //  // TODO: Add constructor logic here  // } // This code is stupid in retrospect. 
Use css to solve this problem      public static string DisplayContentOnMenu(string val, iPhoneDevice ipd)    {        string Return = val;        string Elipsis = "...";        int iPadMaxLength = 30;        int iPhoneMaxLength = 15;        if (ipd == iPhoneDevice.iPad)        {            if (Return.Length > iPadMaxLength)            {                Return = Return.Substring(0, iPadMaxLength - Elipsis.Length) + Elipsis;            }        }        else        {            if (Return.Length > iPhoneMaxLength)            {                Return = Return.Substring(0, iPhoneMaxLength - Elipsis.Length) + Elipsis;            }        }        return (Return);    }}  Source code for the ViewStatePage: using System;using System.Data;using System.Data.SqlClient;using System.Configuration;using System.Web;using System.Web.Security;using System.Web.UI;using System.Web.UI.WebControls;using System.Web.UI.WebControls.WebParts;using System.Web.UI.HtmlControls; /// <summary>/// Summary description for BasePage/// </summary>#region Base class for a page.public class ViewStatePage : System.Web.UI.Page{     PageStatePersisterToDatabase myPageStatePersister;        public ViewStatePage()        : base()    {        myPageStatePersister = new PageStatePersisterToDatabase(this);    }     protected override PageStatePersister PageStatePersister    {        get        {            return myPageStatePersister;        }    } }#endregion #region This class will override the page persistence to store page state in a database.public class PageStatePersisterToDatabase : PageStatePersister{    private string ViewStateKeyField = "__VIEWSTATE_KEY";    private string _exNoConnectionStringFound = "No Database Configuration information is in the web.config.";     public PageStatePersisterToDatabase(Page page)        : base(page)    {    }     public override void Load()    {         // Get the cache key from the web form data        System.Int64 key = Convert.ToInt64(Page.Request.Params[ViewStateKeyField]);         Pair state = this.LoadState(key);         // Abort if cache object is not of type Pair        if (state == null)            throw new ApplicationException("Missing valid " + ViewStateKeyField);         // Set view state and control state        ViewState = state.First;        ControlState = state.Second;    }     public override void Save()    {         // No processing needed if no states available        if (ViewState == null && ControlState != null)            return;         System.Int64 key;        IStateFormatter formatter = this.StateFormatter;        Pair statePair = new Pair(ViewState, ControlState);         // Serialize the statePair object to a string.        string serializedState = formatter.Serialize(statePair);         // Save the ViewState and get a unique identifier back.        key = SaveState(serializedState);         // Register hidden field to store cache key in        // Page.ClientScript does not work properly with Atlas.        
        //Page.ClientScript.RegisterHiddenField(ViewStateKeyField, key.ToString());
        ScriptManager.RegisterHiddenField(this.Page, ViewStateKeyField, key.ToString());
    }

    private System.Int64 SaveState(string PageState)
    {
        System.Int64 i64Key = 0;
        string strConn = String.Empty,
            strProvider = String.Empty;

        // Note: the serialized state is escaped and inlined; a parameterized SqlCommand would be safer here.
        string strSql = "insert into tblPageState ( SerializedState ) values ( '" + SqlEscape(PageState) + "');select scope_identity();";
        SqlConnection sqlCn;
        SqlCommand sqlCm;
        try
        {
            GetDBConnectionString(ref strConn, ref strProvider);
            sqlCn = new SqlConnection(strConn);
            sqlCm = new SqlCommand(strSql, sqlCn);
            sqlCn.Open();
            i64Key = Convert.ToInt64(sqlCm.ExecuteScalar());
            if (sqlCn.State != ConnectionState.Closed)
            {
                sqlCn.Close();
            }
            sqlCn.Dispose();
            sqlCm.Dispose();
        }
        finally
        {
            sqlCn = null;
            sqlCm = null;
        }
        return i64Key;
    }

    private Pair LoadState(System.Int64 iKey)
    {
        string strConn = String.Empty,
            strProvider = String.Empty,
            SerializedState = String.Empty,
            strMinutesInPast = GetMinutesInPastToDelete();
        Pair PageState;
        // Fetch the requested state and purge expired rows in a single round trip.
        string strSql = "select SerializedState from tblPageState where tblPageStateID=" + iKey.ToString() + ";" +
            "delete from tblPageState where DateUpdated<DateAdd(mi, " + strMinutesInPast + ", getdate());";
        SqlConnection sqlCn;
        SqlCommand sqlCm;
        try
        {
            GetDBConnectionString(ref strConn, ref strProvider);
            sqlCn = new SqlConnection(strConn);
            sqlCm = new SqlCommand(strSql, sqlCn);

            sqlCn.Open();
            SerializedState = Convert.ToString(sqlCm.ExecuteScalar());
            IStateFormatter formatter = this.StateFormatter;

            if ((null == SerializedState) ||
                (String.Empty == SerializedState))
            {
                throw (new ApplicationException("No ViewState records were returned."));
            }

            // Deserialize returns the Pair object that is serialized in
            // the Save method.
            PageState = (Pair)formatter.Deserialize(SerializedState);

            if (sqlCn.State != ConnectionState.Closed)
            {
                sqlCn.Close();
            }
            sqlCn.Dispose();
            sqlCm.Dispose();
        }
        finally
        {
            sqlCn = null;
            sqlCm = null;
        }
        return PageState;
    }

    private string SqlEscape(string Val)
    {
        string ReturnVal = String.Empty;
        if (null != Val)
        {
            ReturnVal = Val.Replace("'", "''");
        }
        return (ReturnVal);
    }

    private void GetDBConnectionString(ref string ConnectionStringValue, ref string ProviderNameValue)
    {
        if (System.Configuration.ConfigurationManager.ConnectionStrings.Count > 0)
        {
            ConnectionStringValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
            ProviderNameValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ProviderName;
        }
        else
        {
            throw new ConfigurationErrorsException(_exNoConnectionStringFound);
        }
    }

    private string GetMinutesInPastToDelete()
    {
        // Default to purging state older than 60 minutes unless overridden in appSettings.
        string strReturn = "-60";
        if (null != System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"])
        {
            strReturn = System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"].ToString();
        }
        return (strReturn);
    }
}
#endregion

AppleiPhone.cs file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

/// <summary>
/// User-agent sniffing helpers for Apple devices.
/// </summary>
public class AppleIPhone
{
    static public bool IsIPhoneOS()
    {
        return (IsIPad() || IsIPhone() || IsIPodTouch());
    }

    static public bool IsIPhone()
    {
        return IsTest("iPhone");
    }

    static public bool IsIPodTouch()
    {
        return IsTest("iPod");
    }

    static public bool IsIPad()
    {
        return IsTest("iPad");
    }

    static private bool IsTest(string Agent)
    {
        bool bl = false;
        try
        {
            // UserAgent can be null, so keep the whole check inside the try.
            string ua = HttpContext.Current.Request.UserAgent.ToLower();
            bl = ua.Contains(Agent.ToLower());
        }
        catch { }
        return (bl);
    }
}

Master page .cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

public partial class MasterPages_iPhoneMaster : System.Web.UI.MasterPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        HtmlHead head = Page.Header;

        // Build the viewport meta tag once and add it to the head.
        HtmlMeta meta = new HtmlMeta();
        meta.Attributes.Add("name", "viewport");
        if (AppleIPhone.IsIPad())
        {
            meta.Content = "width=400,user-scalable=no";
        }
        else
        {
            meta.Content = "width=device-width, user-scalable=no";
        }
        head.Controls.Add(meta);

        HtmlLink cssLink = new HtmlLink();
        HtmlGenericControl script = new HtmlGenericControl("script");
        script.Attributes.Add("type", "text/javascript");
        script.Attributes.Add("src", ResolveUrl("~/Scripts/iWebKit/javascript/functions.js"));
        head.Controls.Add(script);
        cssLink.Attributes.Add("rel", "stylesheet");
        cssLink.Attributes.Add("href", ResolveUrl("~/Scripts/iWebKit/css/style.css"));
        cssLink.Attributes.Add("type", "text/css");
        head.Controls.Add(cssLink);

        // jQuery include, currently disabled.
        HtmlGenericControl jsLink = new HtmlGenericControl("script");
        //jsLink.Attributes.Add("type", "text/javascript");
        //jsLink.Attributes.Add("src", ResolveUrl("~/Scripts/jquery-1.4.1.min.js"));
        //head.Controls.Add(jsLink);

        HtmlLink appleIcon = new HtmlLink();
        appleIcon.Attributes.Add("rel", "apple-touch-icon");
        appleIcon.Attributes.Add("href", ResolveUrl("~/apple-touch-icon.png"));
        head.Controls.Add(appleIcon);

        HtmlMeta appleMobileWebAppStatusBarStyle = new HtmlMeta();
        appleMobileWebAppStatusBarStyle.Attributes.Add("name", "apple-mobile-web-app-status-bar-style");
        appleMobileWebAppStatusBarStyle.Attributes.Add("content", "black");
        head.Controls.Add(appleMobileWebAppStatusBarStyle);
    }

    internal string FindPath(string Location)
    {
        string Url = Server.MapPath(Location);
        return (Url);
    }
}
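The persister above depends on a tblPageState table that is not shown in the source. A minimal sketch of what it needs to look like, with the table and column names taken from the queries in SaveState and LoadState and the column types assumed:

    -- Sketch only: names come from the persister's SQL; types are assumptions.
    CREATE TABLE tblPageState
    (
        tblPageStateID  bigint IDENTITY(1,1) NOT NULL PRIMARY KEY, -- returned by scope_identity()
        SerializedState nvarchar(max) NOT NULL,                    -- output of IStateFormatter.Serialize
        DateUpdated     datetime NOT NULL DEFAULT (getdate())      -- consulted by the purge in LoadState
    );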

    Read the article

  • Security in OBIEE 11g, Part 2

    - by Rob Reynolds
Continuing the series on OBIEE 11g, our guest blogger this week is Pravin Janardanam. Here is Part 2 of his overview of Security in OBIEE 11g.

OBIEE 11g Security Overview, Part 2, by Pravin Janardanam

In my previous blog on security, I discussed the OBIEE 11g changes regarding the authentication mechanism, RPD protection and encryption. This blog covers OBIEE 11g authorization and other security aspects.

Authorization: Authorization in 10g was achieved using a combination of Users, Groups and the association of privileges and object permissions to Users and Groups. Two key changes to authorization in OBIEE 11g are:
· Application Roles
· Policies / Permission Groups

Application Roles are introduced in OBIEE 11g. An application role is specific to the application. They can be mapped to other application roles defined in the same application scope and also to enterprise users or groups, and they are used in authorization decisions. Application Roles in 11g take the place of Groups in 10g within the OBIEE application. In OBIEE 10g, any change to corporate LDAP groups required a corresponding change to Groups and their permission assignments. In OBIEE 11g, Application Roles provide insulation between permission definitions and corporate LDAP groups: permissions are defined at the Application Role level, and a change to LDAP groups just requires a reassignment of the Group to the Application Roles.

Permissions and privileges are assigned to Application Roles and users in OBIEE 11g, compared to Groups and Users in 10g. The diagram below shows the relationship between users, groups and application roles. Note that the Groups shown in the diagram refer to LDAP Groups (WebLogic Groups by default) and not OBIEE application Groups. The following screenshot compares the permission windows from the Admin tool in 10g vs 11g. Note that the Groups in OBIEE 10g are replaced with Application Roles in OBIEE 11g. The same is applicable to OBIEE web catalog objects.

The default Application Roles available after an OBIEE 11g installation are BIAdministrator, BISystem, BIConsumer and BIAuthor.

Application policies are the authorization policies that an application relies upon for controlling access to its resources. An Application Role is defined by the Application Policy. The following screenshot shows the policies defined for the BIAdministrator and BISystem roles. Note that the permission for impersonation is granted to the BISystem role. In OBIEE 10g, the permissions to manage repositories and to impersonate were assigned to the “Administrators” group, with no way to separate these permissions within that group; hence the user “Administrator” also had the permission to impersonate. In OBIEE 11g, BIAdministrator does not have the permission to impersonate. This gives more flexibility to have multiple users perform different administrative functions.

Application Roles, Policies, the association of Policies to Application Roles, and the association of users and groups to Application Roles are managed using Fusion Middleware Enterprise Manager (FMW EM). They reside in the policy store, identified by the system-jazn-data.xml file. The screenshots below show where they are created and managed in FMW EM. The following screenshot shows the assignment of WebLogic Groups to Application Roles. The following screenshot shows the assignment of Permissions to Application Roles (Application Policies).

Note: Object-level permission associations to Application Roles reside in the RPD for repository objects.
Permissions and privileges for web catalog objects reside in the OBIEE Web Catalog. Wherever Groups were used in the web catalog and the RPD, they have been replaced with Application Roles in OBIEE 11g.

Following are the tools used in OBIEE 11g security administration:
· Users and Groups are managed in the Oracle WebLogic Administration Console (by default). If WebLogic is integrated with other LDAP products, then Users and Groups need to be managed using the interface provided by the respective LDAP vendor – new in OBIEE 11g.
· Application Roles and Application Policies are managed in Oracle Enterprise Manager – Fusion Middleware Control – new in OBIEE 11g.
· Repository object permissions are managed in the OBIEE Administration tool – same as 10g, but the assignment is to Application Roles instead of Groups.
· Presentation Services catalog permissions and privileges are managed in the OBI Application administration page – same as 10g, but the assignment is to Application Roles instead of Groups.

Credential Store: The Credential Store is a single consolidated service provider to store and manage application credentials securely. It contains credentials that are either user supplied or system generated. The credential store in OBIEE 10g is file based and is managed using the cryptotools utility. In 11g, the Credential Store can be managed directly from FMW Enterprise Manager and is stored in the cwallet.sso file. By default, the Credential Store stores passwords for deployed RPDs, BI Publisher data sources and the BISystem user. In addition, the Credential Store can be LDAP based, but only Oracle Internet Directory is supported right now.

As you can see, OBIEE security is integrated with the Oracle Fusion Middleware security architecture. This provides a common security framework for all components of Business Intelligence and Fusion Middleware applications.
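For reference, application roles live in the system-jazn-data.xml policy store mentioned above. The fragment below is an illustrative sketch (the role and group names are hypothetical) of how a WebLogic group is mapped to an application role in that file:

    <app-role>
      <name>BIAuthor</name>
      <display-name>BI Author Role</display-name>
      <class>oracle.security.jps.service.policystore.ApplicationRole</class>
      <members>
        <member>
          <class>weblogic.security.principal.WLSGroupImpl</class>
          <name>BIAuthors</name>
        </member>
      </members>
    </app-role>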

    Read the article

  • Remove Office 2010 Beta and Reinstall Office 2007

    - by Matthew Guay
Have you tried out the Office 2010 beta, but want to go back to Office 2007? Here’s a step-by-step tutorial on how to remove your Office 2010 beta and reinstall your Office 2007.

The Office 2010 beta will expire on October 31, 2010, at which time you may see a dialog like the one below. At that time, you will need to either upgrade to the final release of Office 2010, or reinstall your previous version of Office.

Our computer was running the Office 2010 Home and Business Click to Run beta, and after uninstalling it we reinstalled Office 2007 Home and Student. This was a Windows Vista computer, but the process will be exactly the same on Windows XP, Vista, or Windows 7. Additionally, the process to reinstall Office 2007 will be exactly the same regardless of the edition of Office 2007 you’re using. However, please note that if you are running a different edition of Office 2010, especially the 64-bit version, the process may be slightly different. We will cover this scenario in another article.

Remove Office 2010 Click to Run Beta: To remove the Office 2010 Click to Run Beta, open Control Panel and select Uninstall a Program. If your computer is running Windows 7, enter “Uninstall a program” in your Start menu search. Scroll down, select “Microsoft Office Click-to-Run 2010 (Beta)”, and click the Uninstall button on the toolbar. Note that there will be two entries for Office, so make sure to select the “Click-to-Run” entry. This will automatically remove all of Office 2010 and its components. Click Yes to confirm you want to remove it. The Office 2010 beta uninstalled fairly quickly, and a reboot will be required. Once your computer is rebooted, Office 2010 will be entirely removed.

Reinstall Office 2007: Now you’re to the easy part. Simply insert your Office 2007 CD, and it should automatically start the setup. If not, open Computer and double-click on your CD drive. Now, double-click on setup.exe to start the installation. Enter your product key, and click Continue. Click Install Now, or click Customize if you want to change the default installation settings. Wait while Office 2007 installs…it takes around 15 to 20 minutes in our experience. Once it’s finished, close the installer.

Now, open one of the Office applications. A popup will open asking you to activate Office. Make sure you’re connected to the internet and click Next; otherwise, you can select to activate over the phone if you do not have internet access. This should only take a minute, and Office 2007 will be activated and ready to run. Everything should work just as it did before you installed Office 2010. Enjoy!

Office Updates: Make sure to install the latest updates for Office 2007, as these are not included on your disk. Check Windows Update (search for Windows Update in the Start menu search), and install all of the available updates for Office 2007, including Service Pack 2.

Conclusion: This is a great way to keep using Office even if you don’t decide to purchase Office 2010 after it is released. Additionally, if you were using another version of Office, such as Office 2003, reinstall it as normal after following the steps to remove Office 2010.

    Read the article

  • Ask How-To Geek: Dropbox in the Start Menu, Understanding Symlinks, and Ripping TV Series DVDs

    - by Jason Fitzpatrick
This week we take a look at how to incorporate Dropbox into your Windows Start Menu, understanding and using symbolic links, and how to rip your TV series DVDs right to unique and high-quality episode files. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

Add Dropbox to Your Start Menu

Dear How-To Geek, I use Dropbox all the time and would like to add it right onto my Start menu alongside the other major shortcuts like Documents, Pictures, etc. It seems like adding Dropbox to the menu should be part of the Dropbox installation package! Sincerely, Dropboxing in Des Moines

Dear Dropboxing, We agree, it would be a nice installation option. As it stands you’re going to have to do a little simple hacking to get Dropbox nestled neatly into your Start menu. The hack isn’t super elegant but when you’re done you’ll have the link you want and it’ll look like it was there all along. Check out this step-by-step guide here in order to take an existing Library shortcut and rework it to be a Dropbox link.

Understanding and Using Symbolic Links

Dear How-To Geek, I was talking to a coworker the other day about an issue I’d been having with a media center application I’m running. He suggested using symbolic links to better organize my media and make it easier for the application to access my collection. I had no idea what he was talking about and never got a chance to bug him about it later. Can you clear up this whole symbolic links business for me? I’ve been using computers for years and I’ve never even heard of it! Sincerely, Symbolic Who?

Dear Symbolic, Symbolic links aren’t commonly used by many Windows users, which is why you likely haven’t run into the concept. Symbolic links are essentially supercharged shortcuts—the newly introduced Windows library system is really just a type of symbolic link system. You can use symbolic links to do all sorts of neat stuff like link folders to your Dropbox folder, organize media, and more. The concept of symbolic links is pretty simple but the execution can be really tricky. We’d suggest reading over our guide to creating symbolic links in Windows 7, Windows XP, and Ubuntu to get a clearer idea of what you’re getting into (there is also a quick mklink sketch at the end of this column).

Rip Your TV DVDs into Handy Episode Files

Dear How-To Geek, My wife got me an iPod for Christmas and I still haven’t gotten around to filling it up. I have tons of entire TV show seasons on DVD and would like to get them on the iPod, but I have absolutely no idea where to start. How do I get the shows off the discs? I thought it would be as easy to import the TV shows into iTunes as it is to import tracks off a CD, but I was totally wrong. I tried downloading some applications to rip them but those didn’t work at all. Very frustrating! Surely there is an easy and/or automated way to do this, right? Sincerely, Free My DVDs

Dear DVDs, Oh man is this a frustration we can relate to. It’s inordinately difficult to get movies and TV shows off physical media and into digital (and portable media player-friendly) formats. There are a multitude of ways to rip DVDs and quite a few applications out there (some good, some mediocre, and some outright malware). We’d recommend a two-part punch to solve your ripping woes: you’ll need a copy of DVDFab to strip away the protections on the discs and rip them, and Handbrake to load the disc image and convert the files.
It’s not quite as smooth as the CD-to-iTunes workflow but it’s still pretty easy. Check out all the steps and settings you’ll want to toggle here. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.
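For the symbolic links question above, here is a minimal sketch of the built-in mklink command on Windows Vista and 7, run from an elevated Command Prompt (the paths are hypothetical; Windows XP has no mklink, so third-party junction tools are needed there):

    REM file symbolic link: mklink <link> <target>
    mklink C:\Users\Public\Episode01.mkv D:\Media\Episode01.mkv
    REM directory symbolic link: add /D
    mklink /D C:\Dropbox\Media D:\Media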

    Read the article

  • Oracle Announces New Oracle Exastack Program for ISV Partners

    - by pfolgado
Oracle Exastack Program Enables ISV Partners to Leverage a Scalable, Integrated Infrastructure to Deliver Their Applications Tuned and Optimized for High Performance

News Facts

Enabling Independent Software Vendors (ISVs) and other members of Oracle Partner Network (OPN) to rapidly build and deliver faster, more reliable applications to end customers, Oracle today introduced Oracle Exastack Ready, available now, and Oracle Exastack Optimized, available in fall 2011 through OPN. The Oracle Exastack Program focuses on helping ISVs run their solutions on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud -- integrated systems in which the software and hardware are engineered to work together. These products provide partners with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments.

Leveraging the new Oracle Exastack Program, in which applications can qualify as Oracle Exastack Ready or Oracle Exastack Optimized, partners can use available OPN resources to optimize their applications to run faster and more reliably -- providing increased performance to their end users. By deploying their applications on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, ISVs can reduce the cost, time and support complexities typically associated with building and maintaining a disparate application infrastructure -- enabling them to focus more on their core competencies, accelerating innovation and delivering superior value to customers.

After qualifying their applications as Oracle Exastack Ready, partners can note to customers that their applications run on and support Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud component products, including Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. Customers can be confident when choosing a partner's Oracle Exastack Optimized application, knowing it has been tuned by the OPN member on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud with a goal of delivering optimum speed, scalability and reliability. Partners participating in the Oracle Exastack Program can also leverage their Oracle Exastack Ready and Oracle Exastack Optimized applications to advance to Platinum or Diamond level in OPN.

Oracle Exastack Programs Provide ISVs a Reliable, High-Performance Application Infrastructure

With the Oracle Exastack Program, ISVs have several options to qualify and tune their applications with Oracle Exastack, including:

Oracle Exastack Ready: Oracle Exastack Ready provides qualifying partners with specific branding and promotional benefits based on their adoption of Oracle products. If a partner application supports the latest major release of one of these products, the partner may use the corresponding logo with their product marketing materials: Oracle Solaris Ready, Oracle Linux Ready, Oracle Database Ready, and Oracle WebLogic Ready. Oracle Exastack Ready is available to OPN members at the Gold level or above. Additionally, OPN members participating in the program can leverage their Oracle Exastack Ready applications toward advancement to the Platinum or Diamond levels in the OPN Specialized program and toward achieving Oracle Exastack Optimized status.
Oracle Exastack Optimized: When available, for OPN members at the Gold level or above, Oracle Exastack Optimized will provide direct access to Oracle technical resources and dedicated Oracle Exastack lab environments so OPN members can test and tune their applications to deliver optimal performance and scalability on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud. Oracle Exastack Optimized will provide OPN members with specific branding and promotional benefits, including the use of the Oracle Exastack Optimized logo. OPN members participating in the program will also be able to leverage their Oracle Exastack Optimized applications toward advancement to Platinum or Diamond level in the OPN Specialized program.

Oracle Exastack Labs and ISV Enablement: Dedicated Oracle Exastack lab environments and related technical enablement resources (including Guided Learning Paths and Boot Camps) will be available through OPN for OPN members to further their knowledge of Oracle Exastack offerings, and to qualify their applications for Oracle Exastack Optimized or Oracle Exastack Ready. Oracle Exastack labs will be available to qualifying OPN members at the Gold level or above.

Partners are eligible to participate in the Oracle Exastack Ready program immediately, which will help them meet the requirements to attain Oracle Exastack Optimized status in the future. Guidelines for Oracle Exastack Optimized, as well as Oracle Exastack Labs, will be available in fall 2011.

Supporting Quotes

"In order to effectively differentiate their software applications in the marketplace, ISVs need to rapidly deliver new capabilities and performance improvements," said Judson Althoff, Oracle senior vice president of Worldwide Alliances and Channels and Embedded Sales. "With Oracle Exastack, ISVs have the ability to optimize and deploy their applications with a complete, integrated and cloud-ready infrastructure that will help them accelerate innovation, unlock new features and functionality, and deliver superior value to customers."

"We view performance as absolutely critical and a key differentiator," said Tom Stock, SVP of Product Management, GoldenSource. "As a leading provider of enterprise data management solutions for securities and investment management firms, with Oracle Exadata Database Machine we see an opportunity to notably improve data processing performance -- providing high quality 'golden copy' data in a reduced timeframe. Achieving Oracle Exastack Optimized status will be a stamp of approval that our solution will provide the performance and scalability that our customers demand."

"As a leading provider of Revenue Intelligence solutions for telecommunications, media and entertainment service providers, our customers continually demand more readily accessible, enriched and pre-analyzed information to minimize their financial risks and maximize their margins," said Alon Aginsky, President and CEO of cVidya Networks. "Oracle Exastack enables our solutions to deliver the power, infrastructure, and innovation required to transform our customers' business operations and stay ahead of the game."

Supporting Resources

Oracle PartnerNetwork (OPN), Oracle Exastack, Oracle Exastack Datasheet, Judson Althoff blog. Connect with the Oracle Partner community at OPN on Facebook, OPN on LinkedIn, OPN on YouTube, or OPN on Twitter.

    Read the article

  • Where is my app.config for SSIS?

Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building an SSIS package, not an application. First of all, let's review a couple of examples where you may need to do this:

You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block.

You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings.

You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings.

You need to change behaviours of the .NET framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection Manager behaves.

You are consuming a WCF service and wish to specify the endpoint in configuration.

There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes. There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package.

The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too.

To recap, the version of SQL Server determines whether you look in the 90 or the 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files: C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below.

There are a total of five specific configuration files that you may need to change, each one detailed below:

DTExec.exe.config: DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.

DtsDebugHost.exe.config: DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.
This may surprise some people, as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties; see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

dtshost.exe.config: dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.

devenv.exe.config: Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor. Visual Studio 2005 for SQL Server 2005 – C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config. Visual Studio 2008 for SQL Server 2008 – C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config. Visual Studio is only available in 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

DTExecUI.exe.config: The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below: C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe and C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe. A configuration file may not exist, but if you can find the matching executable you know you are in the right place and can go ahead and add a new file yourself.

In summary, we have covered the assembly configuration files for all of the standard methods of building and running an SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.
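To make those changes concrete, here is a minimal sketch of the sections you might add to one of the configuration files above, covering the two System.Net examples mentioned earlier; the SMTP host and addresses are placeholders:

    <configuration>
      <system.net>
        <settings>
          <!-- allow parsing of unsafe headers, which changes HTTP Connection Manager behaviour -->
          <httpWebRequest useUnsafeHeaderParsing="true" />
        </settings>
        <mailSettings>
          <!-- default SMTP settings picked up by System.Net.Mail -->
          <smtp deliveryMethod="Network" from="[email protected]">
            <network host="smtp.example.com" port="25" />
          </smtp>
        </mailSettings>
      </system.net>
    </configuration>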

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
My second look at SafePeak's new version (2.1) revealed a few additional interesting features. For those of you who haven't read my previous reviews of SafePeak and aren't familiar with it, here is a quick brief:

SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without code changes to the applications or to the databases. SafePeak performs dynamic database caching, holding the result sets of queries and stored procedures in memory while keeping the cache correct and up to date. Cached queries are served from SafePeak's RAM at microsecond speed rather than being sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gets better scalability. The SafePeak solution is hosted within your cloud servers, hosted servers or enterprise servers, as part of the application architecture. The application is connected via a change of connection strings or by adding a reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers (a hypothetical example of such a reroute appears below). For those who would like to learn more about SafePeak's architecture and how it works, I suggest reading the vendor's webpage: SafePeak Architecture.

More interesting new features in SafePeak 2.1: In my previous review of the new SafePeak I covered the first four things I noticed (check out my article "SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration"): cache setup and fine-tuning – a critical part of getting good caching results; database templates; choosing which database to cache; and the monitoring and analysis options in SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found.

5. Analysis of SQL performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak gets a SQL pattern, and once it is used again there are statistics for that pattern. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions and procs). From this understanding SafePeak creates important analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases and even at the instance level. The data is shown for all arriving queries: read queries (which can be cached), but also all types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard allows going back in time and investigating what happened within any time frame.
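As mentioned in the brief above, one way to point an application at SafePeak is a reroute line in the hosts file. A hypothetical example (the IP address and hostname are made up), redirecting the SQL Server's hostname to the machine running SafePeak:

    # c:\windows\system32\drivers\etc\hosts
    # traffic addressed to sqlprod01 now goes to the SafePeak host
    10.0.0.25    sqlprod01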
6. Logon trigger, for making sure nothing corrupts SafePeak's cached data: If you have an application with many parts, many servers, or many possible locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, each of them can access a database directly and make changes without going through SafePeak – and that can corrupt the data stored in SafePeak's cache. To keep its cache correct, SafePeak needs all updates to arrive through it; if a DBA accesses the database directly and makes changes, SafePeak will simply not know about it and will not clean its cache.

In the new version, SafePeak introduces a feature called "Logon Trigger" to solve the above challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that is not coming from SafePeak. The SafePeak dashboard has an interface for controlling which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and will actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in the SafePeak dashboard is done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab.

Other features:

7. User management: the ability to grant someone permissions to use SafePeak only as a performance analysis tool, without being able to change its configuration.

8. Better reports for performance analysis, using 15-minute resolution charts.

9. Caching of client cursors.

10. Support for IPv6.

Summary: SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak-spike challenges, especially if your apps are packaged or third-party, since no code changes are made. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
We continue building our report in this three part series.

Creating an ASP.NET report using Visual Studio 2010 - Part 1
Creating an ASP.NET report using Visual Studio 2010 - Part 2

Adding the ReportViewer control and filter drop downs: Open the source code for index.aspx and add a ScriptManager control; this control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers, then add the ReportViewer control. The markup after these steps is shown below.

<div>
    <asp:ScriptManager ID="smScriptManager" runat="server">
    </asp:ScriptManager>
    <div id="searchFilter">
        Filter by: Category :
        <asp:DropDownList ID="ddlCategories" runat="server" />
        and Supplier :
        <asp:DropDownList ID="ddlSuppliers" runat="server" />
    </div>
    <rsweb:ReportViewer ID="rvProducts" runat="server">
    </rsweb:ReportViewer>
</div>

The design view for index.aspx is shown below. The dropdowns will display the categories and suppliers in the database; changing the selection in the drop downs will cause the report to be filtered by those selections. You will see how to do this in the next steps.

Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view.

Defining the data sources: We will now define the data source used to populate the report. Go back to the “ReportViewer Tasks” and select “Choose Data Sources”. Select a “New data source..”. Select “Object” and name your Data Source ID “odsProducts”. In the next screen, choose “ProductRepository” as your business object. Choose “GetProductsProjected” in the next screen.

The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier. Set the parameter source to be of type “Control” and set the ControlIDs to be ddlSuppliers and ddlCategories respectively. Your screen will look like this:

We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick “Choose Data Source”. Pick “Object” and give it an ID of “odsCategories”. In the next screen, choose “ProductRepository”. Select the GetCategories() method in the next screen. Select “CategoryName” and “CategoryID” in the next screen. We are done defining the data source for the Category drop down. Perform the same steps for the Suppliers drop down.

Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an “All” list item with an empty value. Go to each drop down and add this list item markup as shown below:

<asp:ListItem Text="All" Value="" />

Finally, double click on each drop down in the designer and add the following code in the code behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down selection is changed.

protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
{
    rvProducts.LocalReport.Refresh();
}

Compile your report and run the page. You should see the report rendered. Note that the toolbar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.
Conclusion: Through this three part series, we did the following:

Created a data layer for use by our RDLC.
Created an RDLC using the report wizard and defined a dataset for the report.
Used the report design surface to design our report, including adding a chart.
Used the ReportViewer control to attach the RDLC.
Connected our ReportViewer to a data source and took parameter values from the drop down lists.
Used AutoPostBack to refresh the report when the dropdown selection was changed.

RDLCs allow you to create interactive reports, including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages, and it is best to compare them and choose the right one for your requirements.

Download VS2010 RTM Sample project NorthwindReports.zip

Alfred Borden: Are you watching closely?

    Read the article

  • Test Drive Windows 7 Online with Virtual Labs

    - by Matthew Guay
Did you miss out on the Windows 7 public beta and want to try it out before you actually make the leap and upgrade? Maybe you want to learn how to deploy new features in a business environment. Here’s how you can test drive Windows 7 directly from your browser.

Whether you manage 10,000 desktops or simply manage your own laptop, it’s usually best to test out a new OS before installing it. If you’re upgrading from Windows XP you may find many things unfamiliar. Microsoft has set up a special Windows 7 Test Drive website with resources to help IT professionals test and deploy Windows 7 in their workplaces. This is a great resource to try out Windows 7 from the comfort of your browser and look at some of the new features without even installing it.

Please note that the online version is not nearly as responsive as a full standard install of Windows 7. It also does not run the full Aero interface or desktop effects, and may refresh slowly depending on your Internet connection. So don’t judge Windows 7’s performance based on this virtual lab, but use it as a way to learn more about Windows 7 without installing it.

Getting Started: To test drive Windows 7, visit Microsoft’s Windows 7 Test Drive website (link below). You will need to run the Windows 7 Test Drive in Internet Explorer, as it requires ActiveX support. We received this error when attempting to run the Test Drive in Firefox. Now, click the “Take a Test Drive” link on the bottom left of the page.

This site includes several test drives to demonstrate different features of Windows 7 and its related ecosystem of products, including Windows Server 2008 R2; some of these, including the XP Mode test drive, are not yet ready. For this test, we selected the MED-V test drive, as this includes Office 2007 and 2010 so you can test them in Windows 7 as well. Simply select the test drive you want, and click “Try it now!”

If you haven’t run a Windows test drive before, you will be asked to install an ActiveX control. Click the link to install. Click the yellow bar at the top of the page in Internet Explorer, and select to install the add-on. You may have to approve a UAC prompt to finish the install. Once this is finished, click the link on the bottom of the page to return to your test drive. The test drive page should automatically refresh; if it doesn’t, click refresh to reload it.

Now the test drive will load the components. Once it is fully loaded, click the link to launch Windows 7 in a new window. You may see a prompt warning that the server may have been impersonated; simply click Yes to proceed. The test lab will give you some getting-started directions; click Close Window when you’re ready to try out Windows 7.

Here’s the default desktop in the Windows 7 test drive. You can use it just like a normal Windows computer, but do note that it may function slowly depending on your internet connection. This test drive includes both Office 2007 and the Office 2010 Tech Preview, so you can try out both in Windows 7 as well. You can try out the new Windows 7 applications, such as the reworked Paint with the Ribbon interface from Office, or you can even test the newest version of Media Center, though it will warn you that it may not function well with the scaled-down graphics in the test drive.

Most importantly, you can try out the new features in Windows 7, such as Jumplists and even Aero Snap. Once again, these features will not be at their quickest, but you can still test them out.
While working with the Virtual Lab, there are different tasks it walks you through. You can also download a copy of the lab manual in PDF format to help you navigate through the various objectives. The test drive system is running Microsoft Forefront Security, the enterprise security solution from which Microsoft Security Essentials has adapted components.

Conclusion: These virtual labs are great for tech students, or for those of you who want a first-hand trial of the new features. Also, if you're not sure how to deploy something and want to practice in a virtual environment, these labs are quite valuable. While the labs are geared toward IT professionals, they're also a good way for anyone to try out Windows 7 features from the comfort of their current computer.

Test Drive Windows 7

    Read the article

  • Interview with Tomas Ulin at the MySQL Innovation Day

    - by Monica Kumar
MySQL Innovation Day, held on June 5, 2012, was a great event for MySQL engineers, users and customers to gather, share and network. I was able to get a few minutes with Tomas Ulin, Vice President of MySQL Engineering at Oracle, to ask him some questions. Here are the highlights of my interview with Tomas.

Monica: This was the first MySQL Innovation Day, correct? Why now? What was the strategy behind hosting this kind of event?

Tomas: In the last year, we have rolled out an incredible number of MySQL events worldwide – some targeted at developers that are new to MySQL and others for the MySQL savvy. At the MySQL Innovation Day, our first event of this kind, we had a number of our key engineers presenting lightning talks delivering previews of key new features as well as discussing roadmap. Our goal is to keep an open dialogue with the MySQL community. In fact, we are hosting a two-day conference, another first, for the MySQL community called MySQL Connect on Sept. 29-30 in San Francisco. If you attended the MySQL Innovation Day and liked what we did, you are going to love MySQL Connect. We’ll have a lot more of our engineers and many users and community members presenting hour-long sessions and hands-on labs. Our engineers will be presenting new MySQL features as well as offering previews of upcoming enhancements.

Monica: What's the big take-away from today's MySQL Innovation Day?

Tomas: I hope the most important takeaway for attendees was to see that Oracle has been driving, and continues to drive, MySQL innovation with a steady stream of great new GA and Development Milestone releases.

Monica: What were attendees most interested in? What feedback did they have?

Tomas: Feedback from attendees was incredibly positive and encouraging. In particular, they liked the interaction with the MySQL engineers and were also excited about the new early access features in MySQL 5.6 and MySQL Cluster 7.3. In addition, sessions delivered by MySQL users like Facebook, Pinterest and Twitter were very well received. For example, Pinterest talked about using MySQL to scale from 0 to billions of page views/month, Twitter talked about “Scaling Twitter with MySQL” and Facebook discussed the many options to implement MySQL master failover solutions. The presentations are already available for download, while some of the session videos will be made available on the MySQL Innovation Day web page shortly.

Monica: How would you distinguish the use of MySQL vs. Oracle Database? What key factors should customers consider?

Tomas: MySQL and Oracle Database complement each other. They are very different products, best suited to different use cases. Customers can choose world-class solutions from Oracle to fulfill a variety of needs. MySQL is a great choice for enterprise web-based, custom and embedded apps. Oracle Database is the leading choice for enterprise packaged applications such as ERP and CRM, as well as high-end data warehousing and business intelligence applications.

Monica: What are the highlights of the current MySQL 5.6 Development Milestone Release and the early access features for MySQL Cluster 7.3?

Tomas: The MySQL 5.6 development milestone release builds on MySQL 5.5 by improving:
the Optimizer, for better performance and scalability;
the Performance Schema, for better instrumentation;
InnoDB, for better transactional throughput;
Replication, for higher availability and data integrity; and
NoSQL options, for more flexibility.

We announced some new early access features in MySQL 5.6, including binary log group commit.
We also announced early access features in MySQL Cluster 7.3, including support for foreign key constraints.

Monica: How do people get these releases?

Tomas: You can access development milestone releases by going to http://dev.mysql.com/downloads/mysql and selecting the “Development Release” tab. MySQL Cluster 7.3 and other early access features can be downloaded at http://labs.mysql.com.

Monica: What's coming up next for MySQL?

Tomas: Our development team is working in overdrive, cranking out new features with community feedback. Don’t miss the MySQL Connect conference being held in San Francisco on Sept. 29 and 30th. My team and I will be there. I hope you can join us!

Monica: Thank you for your time, Tomas. I look forward to seeing you at the MySQL Connect conference. To our followers, I hope you found this interview informative. I welcome your comments. Please stay tuned here for more updates on MySQL.

Note: Monica Kumar is Senior Director of product marketing for Linux, Virtualization and MySQL at Oracle.
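As an illustration of the MySQL Cluster 7.3 early access feature Tomas mentions, foreign key constraints on NDB tables can be sketched as follows (a hypothetical schema, not taken from the interview):

    -- hypothetical parent/child tables on the NDB (MySQL Cluster) storage engine
    CREATE TABLE parent (
        id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDB;

    CREATE TABLE child (
        id INT NOT NULL PRIMARY KEY,
        parent_id INT,
        FOREIGN KEY (parent_id) REFERENCES parent (id)
    ) ENGINE=NDB;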

    Read the article

< Previous Page | 423 424 425 426 427 428 429 430 431 432 433 434  | Next Page >