Search Results

Search found 17847 results on 714 pages for 'virtual disk'.

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
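    Before moving on to the Virtual Machine ACL improvements, it is worth noting that both the Import/Export service and HDInsight described above ultimately land data in Windows Azure Blob Storage. The snippet below is a hedged sketch, not part of the original post: it assumes the Microsoft.WindowsAzure.Storage client library (the WindowsAzure.Storage NuGet package of that era), and the account name, key and container name are placeholders.

        using System;
        using Microsoft.WindowsAzure.Storage;       // NuGet: WindowsAzure.Storage
        using Microsoft.WindowsAzure.Storage.Blob;

        class BlobListingSketch
        {
            static void Main()
            {
                // Placeholder connection string -- substitute your own account name and key.
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

                CloudBlobClient client = account.CreateCloudBlobClient();
                CloudBlobContainer container = client.GetContainerReference("imported-data");

                // Enumerate everything that ended up in the container (for example,
                // data shipped in on a drive or output written by a Hadoop job).
                foreach (IListBlobItem item in container.ListBlobs(null, useFlatBlobListing: true))
                {
                    var blob = item as CloudBlockBlob;
                    if (blob != null)
                        Console.WriteLine("{0}\t{1} bytes", blob.Name, blob.Properties.Length);
                }
            }
        }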
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
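    To make the tag-expression send described in the Notification Hubs section above a little more concrete, here is a hedged sketch assuming the NotificationHubClient type from the WindowsAzure.ServiceBus NuGet package; the connection string, hub name and toast payload are placeholders rather than values from the post.

        using System.Threading.Tasks;
        using Microsoft.ServiceBus.Notifications;   // NuGet: WindowsAzure.ServiceBus

        class TagExpressionSketch
        {
            static async Task SendGameDayOfferAsync()
            {
                // Placeholder connection string and hub name.
                NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(
                    "<DefaultFullSharedAccessSignature connection string>", "myhub");

                // Minimal Windows Store toast payload (placeholder text).
                string toast = "<toast><visual><binding template=\"ToastText01\">" +
                               "<text id=\"1\">Game day special for Boston fans!</text>" +
                               "</binding></visual></toast>";

                // The tag expression targets devices registered with the location tag
                // AND at least one of the two team tags, exactly as described above.
                await hub.SendWindowsNativeNotificationAsync(
                    toast, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");
            }
        }

    Similarly, the partitioned queues called out in the Service Bus section can be created from code as well as through the portal checkbox. This is a hedged sketch assuming the Microsoft.ServiceBus management API; the queue name and connection string are placeholders.

        using Microsoft.ServiceBus;             // NuGet: WindowsAzure.ServiceBus
        using Microsoft.ServiceBus.Messaging;

        class PartitionedQueueSketch
        {
            static void CreatePartitionedQueue()
            {
                // Placeholder Service Bus connection string.
                var namespaceManager = NamespaceManager.CreateFromConnectionString(
                    "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...");

                var description = new QueueDescription("orders")
                {
                    // Spreads the queue across multiple message brokers/stores for
                    // higher throughput and availability, as described in the post.
                    EnablePartitioning = true
                };

                if (!namespaceManager.QueueExists("orders"))
                    namespaceManager.CreateQueue(description);
            }
        }

    TopicDescription exposes the same EnablePartitioning flag if you need a partitioned topic rather than a queue.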
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • Windows 2008 Unknown Disks

    - by Ailbe
    I have a BL460c G7 blade server with OS Windows 2008 R2 SP1. This is a brand new C7000 enclosure, with FlexFabric interconnects. I got my FC switches setup and zoned properly to our Clariion CX4, and can see all the hosts that are assigned FCoE HBAs on both paths in both Navisphere and in HP Virtual Connect Manager. So I went ahead and created a storage group for a test server, assigned the appropriate host, assigned the LUN to the server. So far so good, log onto server and I can see 4 unknown disks.... No problem, I install MS MPIO, no luck, can't initialize the disks, and the multiple disks don't go away. Still no problem, I install PowerPath version 5.5 reboot. Now I see 3 disks. One is initialized and ready to go, but I still have 2 disks that I can't initialize, can't offline, can't delete. If I right click in storage manager and go to properties I can see that the MS MPIO tab, but I can't make a path active. I want to get rid of these phantom disks, but so far nothing is working and google searches are showing up some odd results, so obviously I'm not framing my question right. I thought I'd ask here real quick. Does anyone know a quick way to get rid of these unknown disks. Another question, do I need the MPIO feature installed if I have PowerPath installed? This is my first time installing Windows 2008 R2 in this fashion and I'm not sure if that feature is needed or not right now. So some more information to add to this. It seems I'm dealing with more of a Windows issue than anything else. I removed the LUN from the server, uninstalled PowerPath completely, removed the MPIO feature from the server, and rebooted twice. Now I am back to the original 4 Unknown Disks (plus the local Disk 0 containing the OS partition of course, which is working fine) I went to diskpart, I could see all 4 Unknown disks, I selected each disk, ran clean (just in case i'd somehow brought them online previously as GPT and didn't realize it) After a few minutes I was no longer able to see the disks when I ran list disk. However, the disks are still in Disk Management. When I try and offline the disks from Disk Management I get an error: Virtual Disk Manager - The system cannot find the file specified. Accompanied by an error in System Event Logs: Log Name: System Source: Virtual Disk Service Date: 6/25/2012 4:02:01 PM Event ID: 1 Task Category: None Level: Error Keywords: Classic User: N/A Computer: hostname.local Description: Unexpected failure. Error code: 2@02000018 Event Xml: 1 2 0 0x80000000000000 4239 System hostname.local 2@02000018 I feel sure there is a place I can go in the Registry to get rid of these, I just can't recall where and I am loathe to experiement. So to recap, there are currently no LUNS attached at all, I still have the phantom disks, and I'm getting The system cannot find the file specified from Virtual Disk Manager when I try to take them offline. Thanks!

  • Kernel panic when bringing up DRBD resource

    - by sc.
    I'm trying to set up two machines synchonizing with DRBD. The storage is setup as follows: PV - LVM - DRBD - CLVM - GFS2. DRBD is set up in dual primary mode. The first server is set up and running fine in primary mode. The drives on the first server have data on them. I've set up the second server and I'm trying to bring up the DRBD resources. I created all the base LVM's to match the first server. After initializing the resources with `` drbdadm create-md storage I'm bringing up the resources by issuing drbdadm up storage After issuing that command, I get a kernel panic and the server reboots in 30 seconds. Here's a screen capture. My configuration is as follows: OS: CentOS 6 uname -a Linux host.structuralcomponents.net 2.6.32-279.5.2.el6.x86_64 #1 SMP Fri Aug 24 01:07:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux rpm -qa | grep drbd kmod-drbd84-8.4.1-2.el6.elrepo.x86_64 drbd84-utils-8.4.1-2.el6.elrepo.x86_64 cat /etc/drbd.d/global_common.conf global { usage-count yes; # minor-count dialog-refresh disable-ip-verification } common { handlers { pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o /proc/sysrq-trigger ; halt -f"; # fence-peer "/usr/lib/drbd/crm-fence-peer.sh"; # split-brain "/usr/lib/drbd/notify-split-brain.sh root"; # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root"; # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k"; # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh; } startup { # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb become-primary-on both; wfc-timeout 30; degr-wfc-timeout 10; outdated-wfc-timeout 10; } options { # cpu-mask on-no-data-accessible } disk { # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes # disk-drain md-flushes resync-rate resync-after al-extents # c-plan-ahead c-delay-target c-fill-target c-max-rate # c-min-rate disk-timeout } net { # protocol timeout max-epoch-size max-buffers unplug-watermark # connect-int ping-int sndbuf-size rcvbuf-size ko-count # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri # after-sb-1pri after-sb-2pri always-asbp rr-conflict # ping-timeout data-integrity-alg tcp-cork on-congestion # congestion-fill congestion-extents csums-alg verify-alg # use-rle protocol C; allow-two-primaries yes; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; } } cat /etc/drbd.d/storage.res resource storage { device /dev/drbd0; meta-disk internal; on host.structuralcomponents.net { address 10.10.1.120:7788; disk /dev/vg_storage/lv_storage; } on host2.structuralcomponents.net { address 10.10.1.121:7788; disk /dev/vg_storage/lv_storage; } /var/log/messages is not logging anything about the crash. I've been trying to find a cause of this but I've come up with nothing. Can anyone help me out? Thanks.

  • Grub 'Read Error' - Only Loads with LiveCD

    - by Ryan Sharp
    Problem After installing Ubuntu to complete my Windows 7/Ubuntu 12.04 dual-boot setup, Grub just wouldn't load at all unless I boot from the LiveCD. Afterwards, everything works completely normal. However, this workaround isn't a solution and I'd like to be able to boot without the aid of a disc. Fdisk -l Using the fdisk -l command, I am given the following: Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x324971d1 Device Boot Start End Blocks Id System /dev/sda1 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 208896 48957439 24374272 7 HPFS/NTFS/exFAT /dev/sda3 * 48959486 124067839 37554177 5 Extended /dev/sda5 48959488 124067839 37554176 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc0ee6a69 Device Boot Start End Blocks Id System /dev/sdb1 1024208894 1953523711 464657409 5 Extended /dev/sdb3 * 2048 1024206847 512102400 7 HPFS/NTFS/exFAT /dev/sdb5 1024208896 1937897471 456844288 83 Linux /dev/sdb6 1937899520 1953523711 7812096 82 Linux swap / Solaris Partition table entries are not in disk order Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x292eee23 Device Boot Start End Blocks Id System /dev/sdc1 2048 625141759 312569856 7 HPFS/NTFS/exFAT Bootinfoscript I've used the BootInfoScript, and received the following output: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. => Windows is installed in the MBR of /dev/sdc. sda1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /Boot/BCD sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb1: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdb6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sdb3 starts at sector 200744960. But according to the info from fdisk, sdb3 starts at sector 2048. According to the info in the boot sector, sdb3 has 823461887 sectors, but according to the info from fdisk, it has 1024204799 sectors. Operating System: Boot files: sdc1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 208,896 48,957,439 48,748,544 7 NTFS / exFAT / HPFS /dev/sda3 * 48,959,486 124,067,839 75,108,354 5 Extended /dev/sda5 48,959,488 124,067,839 75,108,352 83 Linux Drive: sdb _____________________________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 1,024,208,894 1,953,523,711 929,314,818 5 Extended /dev/sdb5 1,024,208,896 1,937,897,471 913,688,576 83 Linux /dev/sdb6 1,937,899,520 1,953,523,711 15,624,192 82 Linux swap / Solaris /dev/sdb3 * 2,048 1,024,206,847 1,024,204,800 7 NTFS / exFAT / HPFS Drive: sdc _____________________________________________________________________ Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 2,048 625,141,759 625,139,712 7 NTFS / exFAT / HPFS "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 A48056DF8056B80E ntfs System Reserved /dev/sda2 A8C6D6A4C6D671D4 ntfs Windows /dev/sda5 fd71c537-3715-44e1-b1fe-07537e22b3dd 
ext4 /dev/sdb3 6373D03D0A3747A8 ntfs Steam /dev/sdb5 6f5a6eb3-a932-45aa-893e-045b57708270 ext4 /dev/sdb6 469848c8-867a-41b7-b0e1-b813a43c64af swap /dev/sdc1 725D7B961CF34B1B ntfs backup ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda5 / ext4 (rw,noatime,nodiratime,discard,errors=remount-ro) /dev/sdb5 /home ext4 (rw) =========================== sda5/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd set locale_dir=($root)/boot/grub/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-29-generic } menuentry 'Ubuntu, with Linux 3.2.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd echo 'Loading Linux 3.2.0-29-generic ...' linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-29-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root A48056DF8056B80E chainloader +1 } menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root A8C6D6A4C6D671D4 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda5/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda5 during installation UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1 # /home was on /dev/sdb5 during installation UUID=6f5a6eb3-a932-45aa-893e-045b57708270 /home ext4 defaults 0 2 # swap was on /dev/sdb6 during installation UUID=469848c8-867a-41b7-b0e1-b813a43c64af none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 -------------------------------------------------------------------------------- =================== sda5: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-29-generic 2 = boot/vmlinuz-3.2.0-29-generic 1 = initrd.img 2 = vmlinuz 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda3 00000000 63 6f 70 69 61 20 65 20 63 6f 6c 61 41 63 65 64 |copia e colaAced| 00000010 65 72 20 61 20 74 6f 64 6f 20 6f 20 74 65 78 74 |er a todo o text| 00000020 6f 20 66 61 6c 61 64 6f 20 75 74 69 6c 69 7a 61 |o falado utiliza| 00000030 6e 64 6f 20 61 20 63 6f 6e 76 65 72 73 c3 a3 6f |ndo a convers..o| 00000040 20 64 65 20 74 65 78 74 6f 20 70 61 72 61 20 76 | de texto para v| 00000050 6f 7a 4d 61 6e 69 70 75 6c 61 72 20 61 73 20 64 |ozManipular as d| 00000060 65 66 69 6e 69 c3 a7 c3 b5 65 73 20 71 75 65 20 |efini....es que | 00000070 63 6f 6e 74 72 6f 6c 61 6d 20 6f 20 61 63 65 73 |controlam o aces| 00000080 73 6f 20 64 65 20 57 65 62 73 69 74 65 73 20 61 |so de Websites a| 00000090 20 63 6f 6f 6b 69 65 73 2c 20 4a 61 76 61 53 63 | cookies, JavaSc| 000000a0 72 69 70 74 20 65 20 70 6c 75 67 2d 69 6e 73 4d |ript e plug-insM| 000000b0 61 6e 69 70 75 6c 61 72 20 61 73 20 64 65 66 69 |anipular as defi| 000000c0 6e 69 c3 a7 c3 b5 65 73 20 72 65 6c 61 63 69 6f |ni....es relacio| 000000d0 6e 61 64 61 73 20 63 6f 6d 20 70 72 69 76 61 63 |nadas com privac| 000000e0 69 64 61 64 65 41 63 65 64 65 72 20 61 6f 73 20 |idadeAceder aos | 000000f0 73 65 75 73 20 70 65 72 69 66 c3 a9 72 69 63 6f |seus perif..rico| 00000100 73 20 55 53 42 55 74 69 6c 69 7a 61 72 20 6f 20 |s USBUtilizar o | 00000110 73 65 75 20 6d 69 63 72 6f 66 6f 6e 65 55 74 69 |seu microfoneUti| 00000120 6c 69 7a 61 72 20 61 20 73 75 61 20 63 c3 a2 6d |lizar a sua c..m| 00000130 61 72 61 55 74 69 6c 69 7a 61 72 20 6f 20 73 65 |araUtilizar o se| 00000140 75 20 6d 69 63 72 6f 66 6f 6e 65 20 65 20 61 20 |u microfone e a | 00000150 63 c3 a2 6d 61 72 61 4e c3 a3 6f 20 66 6f 69 20 |c..maraN..o foi | 00000160 70 6f 73 73 c3 ad 76 65 6c 20 65 6e 63 6f 6e 74 |poss..vel encont| 00000170 72 61 72 20 6f 20 63 61 6d 69 6e 68 6f 20 61 62 |rar o caminho ab| 00000180 73 6f 6c 75 74 6f 20 70 61 72 61 20 6f 20 64 69 |soluto para o di| 00000190 72 65 63 74 c3 b3 72 69 6f 20 61 20 65 6d 70 61 |rect..rio a empa| 000001a0 63 6f 74 61 72 2e 4f 20 64 69 72 65 63 74 c3 b3 |cotar.O direct..| 000001b0 72 69 6f 20 64 65 20 65 6e 74 72 61 64 61 00 fe |rio de entrada..| 000001c0 ff ff 83 fe ff ff 02 00 00 00 00 10 7a 04 00 00 |............z...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. 
line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in Begging / Appreciation ;) If anything else is required to solve my problem, please ask. My only hopes are that I can solve this, that doing so won't require reinstalling Grub (the procedures look complicated), and that I won't need to reinstall the OSes, as I have already done that about six times since Friday due to several other issues I've encountered. Thank you, and good day. System: Ubuntu 12.04 64-bit / Windows 7 SP1 64-bit, 64GB SSD as the boot/OS drive, 1TB HDD as the /home, swap and Steam drive.

  • C# XNA Handle mouse events?

    - by user406470
    I'm making a 2D game engine called Clixel over on GitHub. The problem I have relates to two classes, ClxMouse and ClxButton. In it I have a mouse class - the code for that can be viewed here. ClxMouse using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; namespace org.clixel { public class ClxMouse : ClxSprite { private MouseState _curmouse, _lastmouse; public int Sensitivity = 3; public bool Lock = true; public Vector2 Change { get { return new Vector2(_curmouse.X - _lastmouse.X, _curmouse.Y - _lastmouse.Y); } } private int _scrollwheel; public int ScrollWheel { get { return _scrollwheel; } } public bool LeftDown { get { if (_curmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightDown { get { if (_curmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleDown { get { if (_curmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public bool LeftPressed { get { if (_curmouse.LeftButton == ButtonState.Pressed && _lastmouse.LeftButton == ButtonState.Released) return true; else return false; } } public bool RightPressed { get { if (_curmouse.RightButton == ButtonState.Pressed && _lastmouse.RightButton == ButtonState.Released) return true; else return false; } } public bool MiddlePressed { get { if (_curmouse.MiddleButton == ButtonState.Pressed && _lastmouse.MiddleButton == ButtonState.Released) return true; else return false; } } public bool LeftReleased { get { if (_curmouse.LeftButton == ButtonState.Released && _lastmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightReleased { get { if (_curmouse.RightButton == ButtonState.Released && _lastmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleReleased { get { if (_curmouse.MiddleButton == ButtonState.Released && _lastmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public MouseState CurMouse { get { return _curmouse; } } public MouseState LastMouse { get { return _lastmouse; } } public ClxMouse() : base(ClxG.Textures.Default.Cursor) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); this.Solid = false; DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); Mouse.SetPosition(CollisionBox.X, CollisionBox.Y); } public ClxMouse(Texture2D _texture) : base(_texture) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); } public override void Update() { _lastmouse = _curmouse; _curmouse = Mouse.GetState(); if (_curmouse != _lastmouse) { if (ClxG.Game.IsActive) { _scrollwheel = _curmouse.ScrollWheelValue; Velocity = new Vector2(Change.X / Sensitivity, Change.Y / Sensitivity); if (Lock) Mouse.SetPosition(ClxG.Screen.Center.X, ClxG.Screen.Center.Y); _curmouse = Mouse.GetState(); } base.Update(); } } public override void Draw(SpriteBatch _sb) { base.Draw(_sb); } } } ClxButton using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace org.clixel { public class ClxButton : ClxSprite { /// <summary> /// The color when the mouse is over the button /// </summary> public Color HoverColor; /// <summary> /// The color when the color is being clicked /// </summary> public Color 
ClickColor; /// <summary> /// The color when the button is inactive /// </summary> public Color InactiveColor; /// <summary> /// The color when the button is active /// </summary> public Color ActiveColor; /// <summary> /// The color after the button has been clicked. /// </summary> public Color ClickedColor; /// <summary> /// The text to be displayed on the button, set to "" if no text is needed. /// </summary> public string Text; /// <summary> /// The ClxText object to be displayed. /// </summary> public ClxText TextRender; /// <summary> /// The ClxState that should be ResetAndShow() when the button is clicked. /// </summary> public ClxState ClickState; /// <summary> /// Collision check to make sure onCollide() only runs once per frame, /// since only the mouse needs to be collision checked. /// </summary> private bool _runonce = false; /// <summary> /// Gets a value indicating whether this instance is colliding. /// </summary> /// <value> /// <c>true</c> if this instance is colliding; otherwise, <c>false</c>. /// </value> public bool IsColliding { get { return _runonce; } } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> public ClxButton() : base(ClxG.Textures.Default.Button) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Text = Name + ID + " Unset!"; TextRender = new ClxText(); TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> /// <param name="_texture">The button texture.</param> public ClxButton(Texture2D _texture) : base(_texture) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Texture = _texture; Text = Name + ID; TextRender = new ClxText(); TextRender.Name = this.Name + ".TextRender"; TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); TextRender.Reset(); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Draws the debug information, run from ClxG.DrawDebug unless manual control is assumed. /// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void DrawDebug(SpriteBatch _sb) { _runonce = false; TextRender.DrawDebug(_sb); _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), DebugColor, Rotation, Origin, Flip, Layer); _sb.Draw(ClxG.Textures.Default.DebugBG, new Rectangle(ActualRectangle.X - DebugLineWidth, ActualRectangle.Y - DebugLineWidth, ActualRectangle.Width + DebugLineWidth * 2, ActualRectangle.Height + DebugLineWidth * 2), new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugOutline, Rotation, Origin, Flip, Layer - 0.1f); _sb.Draw(ClxG.Textures.Default.DebugBG, ActualRectangle, new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugBGColor, Rotation, Origin, Flip, Layer - 0.01f); } /// <summary> /// Draws using the SpriteBatch, run from ClxG.Draw unless manual control is assumed. 
/// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void Draw(SpriteBatch _sb) { _runonce = false; TextRender.Draw(_sb); if (Visible) if (Debug) { DrawDebug(_sb); } else _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), Color, Rotation, Origin, Flip, Layer); } /// <summary> /// Updates this instance. /// </summary> public override void Update() { if (this.Color != ActiveColor) this.Color = ActiveColor; TextRender.Layer = this.Layer + 0.03f; TextRender.Text = Text; TextRender.Scale = .5f; TextRender.Name = this.Name + ".TextRender"; TextRender.Origin = new Vector2(TextRender.CollisionBox.Center.X, TextRender.CollisionBox.Center.Y); TextRender.Center(this); TextRender.Update(); this.CollisionBox.Width = (int)(TextRender.CollisionBox.Width * TextRender.Scale) + (int)(TextRender.TextPadding.X * 2); this.CollisionBox.Height = (int)(TextRender.CollisionBox.Height * TextRender.Scale) + (int)(TextRender.TextPadding.Y * 2); base.Update(); } /// <summary> /// Collide event, takes the colliding object to call it's proper collision code. /// You'd want to use something like if(typeof(collider) == typeof(ClxObject) /// </summary> /// <param name="collider">The colliding object.</param> public override void onCollide(ClxObject collider) { if (!_runonce) { _runonce = true; UpdateEvents(); base.onCollide(collider); } } /// <summary> /// Updates the mouse based events. /// </summary> public void UpdateEvents() { onHover(); if (ClxG.Mouse.LeftReleased) { onLeftReleased(); return; } if (ClxG.Mouse.RightReleased) { onRightReleased(); return; } if (ClxG.Mouse.MiddleReleased) { onMiddleReleased(); return; } if (ClxG.Mouse.LeftPressed) { onLeftClicked(); return; } if (ClxG.Mouse.RightPressed) { onRightClicked(); return; } if (ClxG.Mouse.MiddlePressed) { onMiddleClicked(); return; } if (ClxG.Mouse.LeftDown) { onLeftClick(); return; } if (ClxG.Mouse.RightDown) { onRightClick(); return; } if (ClxG.Mouse.MiddleDown) { onMiddleClick(); return; } } /// <summary> /// Shows the state of the click. /// </summary> public void ShowClickState() { if (ClickState != null) { ClickState.ResetAndShow(); } } /// <summary> /// Hover event /// </summary> virtual public void onHover() { this.Color = HoverColor; } /// <summary> /// Left click event /// </summary> virtual public void onLeftClick() { this.Color = ClickColor; } /// <summary> /// Right click event /// </summary> virtual public void onRightClick() { } /// <summary> /// Middle click event /// </summary> virtual public void onMiddleClick() { } /// <summary> /// Left click event, called once per click /// </summary> virtual public void onLeftClicked() { ShowClickState(); } /// <summary> /// Right click event, called once per click /// </summary> virtual public void onRightClicked() { this.Reset(); } /// <summary> /// Middle click event, called once per click /// </summary> virtual public void onMiddleClicked() { } /// <summary> /// Ons the left released. /// </summary> virtual public void onLeftReleased() { this.Color = ClickedColor; } virtual public void onRightReleased() { } virtual public void onMiddleReleased() { } } } The issue I have is that I have all these have event styled methods, especially in ClxButton with all the onLeftClick, onRightClick, etc, etc. Is there a better way for me to handle these events to be a lot more easier for a programmer to use? I was looking at normal events on some other sites, (I'd post them but I need more rep.) 
and didn't really see a good way to implement delegate-based events in my framework. I'm not entirely sure how these events work under the hood, so could someone lay out how they are processed? TL;DR:
* Is there a better way to handle events like this?
* Are .NET events a viable solution to this problem?
Thanks in advance for any help.
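    A minimal sketch of what the delegate/event approach could look like, assuming nothing about Clixel beyond the ClxButton code above (the ClxEventButton name and the choice of plain EventHandler are illustrative, not part of the engine): the existing virtual onXxx methods stay as the raising points, and consumers subscribe instead of subclassing.

    using System;

    namespace org.clixel
    {
        // Sketch only: wraps the existing virtual methods with .NET events.
        public class ClxEventButton : ClxButton
        {
            public event EventHandler LeftClicked;
            public event EventHandler Hovered;

            public override void onLeftClicked()
            {
                base.onLeftClicked();        // keep the existing ShowClickState() behaviour
                var handler = LeftClicked;   // copy to avoid a race if a subscriber detaches
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }

            public override void onHover()
            {
                base.onHover();              // keep the hover colour change
                var handler = Hovered;
                if (handler != null)
                    handler(this, EventArgs.Empty);
            }
        }
    }

    A game programmer could then write button.LeftClicked += (s, e) => StartNewGame(); per button instance instead of deriving a new button class for every behaviour; the same pattern extends to the right/middle and pressed/released variants.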

    Read the article

  • How to disable Mac OS X from using swap when there still is "Inactive" memory?

    - by Motin
    A common phenomena in my day to day usage (and several other's according to various posts throughout the internet) of OS X, the system seems to become slow whenever there is no more "Free" memory available. Supposedly, this is due to swapping, since heavy disk activity is apparent and that vm_stat reports many pageouts. (Correct me from wrong) However, the amount of "Inactive" ram is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends. According to http://support.apple.com/kb/ht1342 : Inactive memory This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk. And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time. So, basically: When a program has quit, it's memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping out memory to the Swap file instead of just claiming this memory, whenever the "Free" memory gets to low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touch the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense now does it if the memory may be released from memory at any time? Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't include disabling swap/dynamic_pager altogether and restarting...) I do appreciate the purge command, as well as the concept of Repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory than to actually fixing the swap/release decision logic... Btw a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it... ^1. UPDATE 17-mar-2012 Since I first posted this question, I have gone from 4gb to 8gb of installed ram, and the problem remains. The amount of "Inactive" ram was 0.5gb-1.0gb before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, ie it seems that around 12.5%-25% of the ram is preserved as Inactive by osx kernel logic. ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory. UPDATE 17-mar-2012 Here is a round-up of the methods that have been suggested to help so far: The purge command "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. 
It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (it is ridiculous that OS X does so in the first place), but it has the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out anyway, one simply ends up with a cold disk buffer cache, which probably hurts performance. The FreeMemory app and/or Repairing disk permissions to force some Free memory: doesn't release any memory, it only moves some gigabytes of memory contents from RAM to the HD. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as much of their VM is now on swap. Speeding up swap allocation using dynamicpagerwrapper: seems a good thing to do in order to speed up swap usage, but it does not address the problem of OS X swapping in the first place while there is still Inactive memory. Disabling swap by disabling dynamic_pager and restarting: this forces OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative... Disabling swap using a hacked dynamic_pager: similar to disabling dynamic_pager above; some excerpts from the comments on the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang". To sum up, I am still unaware of a way to keep Mac OS X from using swap while there is still "Inactive" memory. If that isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released at any time?
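    For anyone wanting to watch this behaviour as it happens, a small sketch using stock OS X command-line tools (purge ships with the developer tools on Snow Leopard-era systems), run before and during a slowdown to see whether pageouts climb while Inactive memory stays high:

    # Free / active / inactive / wired pages plus cumulative pageins and pageouts
    vm_stat

    # Same counters sampled every 5 seconds while reproducing the slowdown
    vm_stat 5

    # Swap file usage as seen by the kernel
    sysctl vm.swapusage

    # Drop the disk buffer cache (leaves anonymous memory alone)
    purge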

    Read the article

  • arp problems with transparent bridge on linux

    - by Mink
    I've been trying to secure my virtual machines on my esx server by putting them behind a transparent bridge with 2 interfaces, one in front, one at the back. My intention is to put all the firewall rules in one place (instead of on each virtual server). I've been using as bridge a blank new virtual machine based on arch linux (but I suspect it doesn't matter which brand of linux it is). What I have is 2 virtual switchs (thus two Virtual Network, VN_front and VN_back), each with 2 types of ports (switched/separated or promiscious/where the machine can see all packets). On my bridge machine, I've set up 2 virtual NIC, one on VN_front, one on VN_back, both in promisc mode. I've created a bridge br0 with both NIC in it: brctl addbr br0 brctl stp br0 off brctl addif br0 front_if brctl addif br0 back_if Then brought them up: ifconfig front_if 0.0.0.0 promisc ifconfig back_if 0.0.0.0 promisc ifconfig br0 0.0.0.0 (I use promisc mode, because I'm not sure I can do without, thinking that maybe the packets don't reach the NICs) Then I took one of my virtual server sitting on VN_front, and plugged it to VN_back instead (that's the nifty use case I'm thinking about, being able to move my servers around just by changing the VN they are plugged into, without changing anything in the configuration). Then I looked into the macs "seen" by my addressless bridge using brctl showmacs br0 and it did show my server from both sides: I get something that looks like this : port no mac addr is local? ageing timer 2 00:0c:29:e1:54:75 no 9.27 1 00:0c:29:fd:86:0c no 9.27 2 00:50:56:90:05:86 no 73.38 1 00:50:56:90:05:88 no 0.10 2 00:50:56:90:05:8b yes 0.00 << FRONT VN 1 00:50:56:90:05:8c yes 0.00 << BACK VN 2 00:50:56:90:19:18 no 13.55 2 00:50:56:90:3c:cf no 13.57 the thing is that the server that are plugged in front/back are not shown on the correct port. I suspect some horrible thing happening in the ARP-world... :-/ If I ping from a front virtual server to a back virtual server, I can only see the back machine if that back machine pings something in the front. As soon as I stop the ping from the back machine, the ping from the front machine stops getting through... I've noticed that if the back machine pings, then its port on the bridge is the correct one... I've tried to play with the arp_ switch of /proc/sys, but with no clear effect on the end result... /proc/sys/net/ipv4/ip_forward doesn't seem to be of any use when using a bridge (seems it's all taken care of by brctl) /proc/sys/net/ipv4/conf//arp_ don't seem to change much either... (tried arp_announce to 2 or 8 - like suggested elsewhere - and arp_ignore to 0 or 1 ) All the examples I've seen have a different subnet on either side like 10.0.1.0/24 and 10.0.2.0/24... In my case I want 10.0.1.0/24 on both side (just like a transparent switch - except it's a hidden fw ). Turning stp on/off doesn't seem to have any impact on my issue. It's as if the arp packets where getting through the bridge, corrupting the other side with false data... I've tried to use the -arp on each interface, br0, front, back... it breaks the thing altogether... I suspect it has something to do with both side being on the same subnet... I've thought about putting all my machine behind the fw, so as to have all the same subnet at the back... but I'm stuck with my provider's gateway standing at the front with part of my subnet (in fact 3 appliance to route the whole subnet), so I'll always have ips from the same subnet on both side, whatever I do... 
(I'm using fixed front IPs on my delegated subnet.) I'm at a loss... Thanks for your help. (Has anyone tried something like this from within ESXi?) (It's not just a stunt: the idea is to have something like fail2ban running on some servers, sending the IPs they ban to the bridge/firewall so that it can ban them too - protecting all the other servers from the same attacker in one go - and to let a honeypot trigger the firewall from any suitable response, and things of that sort... I am aware I could use something like Snort, but that addresses a completely different kind of problem, in a completely different way.)
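    Not a fix for the ARP symptom itself, but once frames do flow, the single-point firewall part is usually wired up roughly as sketched below; the front_if/back_if and br0 names come from the question, and the exact module/sysctl names vary by kernel, so treat this as an outline rather than a recipe.

    # Make bridged frames traverse iptables (bridge-nf); on newer kernels this
    # lives in a separate module, on older ones it comes with the bridge module
    modprobe br_netfilter 2>/dev/null || true
    sysctl -w net.bridge.bridge-nf-call-iptables=1

    # Filter traffic crossing the bridge, keyed on the bridge ports
    iptables -A FORWARD -m physdev --physdev-in front_if --physdev-out back_if -p tcp --dport 22 -j DROP
    iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

    # Re-check which MACs the bridge has learned on which port while debugging
    brctl showmacs br0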

    Read the article

  • Can't update scala on Gentoo

    - by xhochy
    As I wanted to test Scala 2.9.2 on my gentoo system I tried updated the package but ended up with this error. I can't figure out where the problem may be: Calculating dependencies ...... done! >>> Verifying ebuild manifests >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Emerging (1 of 1) dev-lang/scala-2.9.2 >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Failed to emerge dev-lang/scala-2.9.2, Log file: >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log' >>> Jobs: 0 of 1 complete, 1 running Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 running, 1 failed Load avg: 0.23, 0.16, 0.20 >>> Jobs: 0 of 1 complete, 1 failed Load avg: 0.23, 0.16, 0.20 * Package: dev-lang/scala-2.9.2 * Repository: gentoo * Maintainer: [email protected] * USE: amd64 elibc_glibc kernel_linux multilib userland_GNU * FEATURES: sandbox [01m[31;06m!!! ERROR: Couldn't find suitable VM. Possible invalid dependency string. Due to jdk-with-com-sun requiring a target of 1.7 but the virtual machines constrained by virtual/jdk-1.6 and/or this package requiring virtual(s) jdk-with-com-sun[0m * Unable to determine VM for building from dependencies: NV_DEPEND: virtual/jdk:1.6 java-virtuals/jdk-with-com-sun !binary? ( dev-java/ant-contrib:0 ) app-arch/xz-utils >=dev-java/java-config-2.1.9-r1 source? ( app-arch/zip ) >=dev-java/ant-core-1.7.0 dev-java/ant-nodeps >=dev-java/javatoolkit-0.3.0-r2 >=dev-lang/python-2.4 * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. !!! When you file a bug report, please include the following information: GENTOO_VM= CLASSPATH="" JAVA_HOME="" JAVACFLAGS="" COMPILER="" and of course, the output of emerge --info * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. * Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' * Messages for package dev-lang/scala-2.9.2: * Unable to determine VM for building from dependencies: * ERROR: dev-lang/scala-2.9.2 failed (setup phase): * Failed to determine VM for building. * * Call stack: * ebuild.sh, line 93: Called pkg_setup * scala-2.9.2.ebuild, line 43: Called java-pkg-2_pkg_setup * java-pkg-2.eclass, line 53: Called java-pkg_init * java-utils-2.eclass, line 2187: Called java-pkg_switch-vm * java-utils-2.eclass, line 2674: Called die * The specific snippet of code: * die "Failed to determine VM for building." * * If you need support, post the output of `emerge --info '=dev-lang/scala-2.9.2'`, * the complete build log and the output of `emerge -pqv '=dev-lang/scala-2.9.2'`. * The complete build log is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/build.log'. * The ebuild environment file is located at '/var/tmp/portage/dev-lang/scala-2.9.2/temp/die.env'. 
* Working directory: '/var/tmp/portage/dev-lang/scala-2.9.2' * S: '/var/tmp/portage/dev-lang/scala-2.9.2/work/scala-2.9.2-sources' The following eix output may help: % eix java-virtuals/jdk-with-com-sun [I] java-virtuals/jdk-with-com-sun Available versions: 20111111 {{ELIBC="FreeBSD"}} Installed versions: 20111111(16:08:51 18/04/12)(ELIBC="-FreeBSD") Homepage: http://www.gentoo.org Description: Virtual ebuilds that require internal com.sun classes from a JDK Both virtual jdks 1.6 and 1.7 are installed: % eix virtual/jdk [I] virtual/jdk Available versions: (1.4) ~1.4.2-r1[1] (1.5) 1.5.0 ~1.5.0-r3[1] (1.6) 1.6.0 1.6.0-r1 (1.7) (~)1.7.0 Installed versions: 1.6.0-r1(1.6)(23:22:48 10/11/12) 1.7.0(1.7)(23:21:09 10/11/12) Description: Virtual for JDK [1] "java-overlay" /var/lib/layman/java-overlay
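    The failure is the build not finding a JVM that satisfies both virtual/jdk:1.6 and the jdk-with-com-sun 1.7 target, so one avenue (a sketch, not a guaranteed fix) is to check which VM is active system-wide and point it at an installed 1.7 JDK before retrying; the icedtea-bin-7 name below is only an example, use whatever eselect lists.

    # Show the JVMs Gentoo knows about and which one is the system VM
    eselect java-vm list

    # Switch the system VM to an installed 1.7 JDK (name taken from the list above)
    eselect java-vm set system icedtea-bin-7

    # Double-check what java-config will hand to the build
    java-config --list-available-vms

    # Retry just this package
    emerge --oneshot =dev-lang/scala-2.9.2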

    Read the article

  • As a small business about to overhaul infrastructure and go virtual, how can we take advantage of all the features of the QNAP TS-439 Pro II+?

    - by Sally
    Specifically, how can we benefit from this list of features? We're very new to this and I want to be able to talk intelligently to our IT consultant.
    * VMware Ready
    * Citrix Ready
    * Built-in iSCSI target service
    * Virtual Disk Drive (via iSCSI Initiator)
    * Remote Replication
    * Multi-LUN per Target
    * LUN Mapping & LUN Masking Support
    * SPC-3 Persistent Reservation Support
    What other products should we compare this QNAP to? I appreciate how informative the site is, but they only seem to sell their products through a small number of channels. Is QNAP well known? TIA!

    Read the article

  • How to set up VPN connection? Virtual Box 3.1.4 installed. Host - Snow Leopard(Mac) Guest - Windows 7 (32-bit)

    - by user31954
    I have VirtualBox 3.1.4 installed. Host - Snow Leopard (Mac), Guest - Windows 7 (32-bit). I have installed Windows on my Mac because I need it for work. I cannot establish a VPN connection (using NAT). I tried to use a bridged adapter, and I lost the internet connection on my guest (Win7) completely. I don't know much about networking, so I need detailed instructions for these particular OSes. Could someone please help me with this? Some random details about my attempts: on the Windows guest I get error 800 when trying to VPN. I can ping the server address from my Win 7 guest, and I can establish the VPN connection from my Mac host. I do disable the VPN on the Mac when trying to establish it through the guest. I also tried to VPN from the Mac to see if the guest sees it. It doesn't. Thank you!
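    Error 800 on a PPTP-style VPN behind NAT is commonly a GRE pass-through problem, which is why bridging is the usual workaround; a sketch of setting that up from the Mac side with VBoxManage (the VM name "Win7" is an assumption, and the exact interface string must be copied from the list command):

    # See the exact names of the host interfaces that can be bridged
    VBoxManage list bridgedifs

    # With the VM powered off, attach its first NIC to the Mac's active interface
    VBoxManage modifyvm "Win7" --nic1 bridged --bridgeadapter1 "en1: AirPort"

    # To return to NAT later
    VBoxManage modifyvm "Win7" --nic1 nat

    Inside the Windows 7 guest, leaving the adapter on DHCP lets it pick up an address on the same network as the Mac.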

    Read the article

  • Fluent NHibernate Many to one mapping

    - by Jit
    I am new to Hibernate world. It may be a silly question, but I am not able to solve it. I am testing many to One relationship of tables and trying to insert record. I have a Department table and Employee table. Employee and Dept has many to One relationship here. I am using Fluent NHibernate to add records. All codes below. Pls help - SQL Code create table Dept ( Id int primary key identity, DeptName varchar(20), DeptLocation varchar(20)) create table Employee ( Id int primary key identity, EmpName varchar(20),EmpAge int, DeptId int references Dept(Id)) Class Files public partial class Dept { public virtual System.String DeptLocation { get; set; } public virtual System.String DeptName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual IList<Employee> Employees { get; set; } } public partial class Employee { public virtual System.Int32 DeptId { get; set; } public virtual System.Int32 EmpAge { get; set; } public virtual System.String EmpName { get; set; } public virtual System.Int32 Id { get; private set; } public virtual Project.Model.Dept Dept { get; set; } } Mapping Files public class DeptMapping : ClassMap { public DeptMapping() { Id(x = x.Id); Map(x = x.DeptName); Map(x = x.DeptLocation); HasMany(x = x.Employees) .Inverse() .Cascade.All(); } } public class EmployeeMapping : ClassMap { public EmployeeMapping() { Id(x = x.Id); Map(x = x.EmpName); Map(x = x.EmpAge); Map(x = x.DeptId); References(x = x.Dept) .Cascade.None(); } } My Code to add try { Dept dept = new Dept(); dept.DeptLocation = "Austin"; dept.DeptName = "Store"; Employee emp = new Employee(); emp.EmpName = "Ron"; emp.EmpAge = 30; IList<Employee> empList = new List<Employee>(); empList.Add(emp); dept.Employees = empList; emp.Dept = dept; IRepository<Dept> rDept = new Repository<Dept>(); rDept.SaveOrUpdate(dept); } catch (Exception ex) { Console.WriteLine(ex.Message); } Here i am getting error as InnerException = {"Invalid column name 'Dept_id'."} Message = "could not insert: [Project.Model.Employee][SQL: INSERT INTO [Employee] (EmpName, EmpAge, DeptId, Dept_id) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY()]"
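    The failing INSERT lists both DeptId and Dept_id because the DeptId property and the Dept reference are mapped separately, and References() defaults its column name to Dept_id. A sketch of one common fix with Fluent NHibernate (the lambda arrows, which appear flattened to '=' above, are written out in full; assumes the usual ClassMap<T> base class): map the association once, against the real DeptId column, and drop the raw DeptId property from the mapping.

    using FluentNHibernate.Mapping;
    using Project.Model;

    public class EmployeeMapping : ClassMap<Employee>
    {
        public EmployeeMapping()
        {
            Id(x => x.Id);
            Map(x => x.EmpName);
            Map(x => x.EmpAge);
            // One mapping for the association, pointed at the existing FK column.
            References(x => x.Dept)
                .Column("DeptId")
                .Cascade.None();
        }
    }

    public class DeptMapping : ClassMap<Dept>
    {
        public DeptMapping()
        {
            Id(x => x.Id);
            Map(x => x.DeptName);
            Map(x => x.DeptLocation);
            HasMany(x => x.Employees)
                .KeyColumn("DeptId")   // keep the collection keyed on the same column
                .Inverse()
                .Cascade.All();
        }
    }

    If Employee keeps its DeptId property for convenience, it can stay unmapped, or be mapped with Not.Insert().Not.Update() so only the reference writes the column.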

    Read the article

  • N-tier Repository POCOs - Aggregates?

    - by Sam
    Assume the following simple POCOs, Country and State: public partial class Country { public Country() { States = new List<State>(); } public virtual int CountryId { get; set; } public virtual string Name { get; set; } public virtual string CountryCode { get; set; } public virtual ICollection<State> States { get; set; } } public partial class State { public virtual int StateId { get; set; } public virtual int CountryId { get; set; } public virtual Country Country { get; set; } public virtual string Name { get; set; } public virtual string Abbreviation { get; set; } } Now assume I have a simple respository that looks something like this: public partial class CountryRepository : IDisposable { protected internal IDatabase _db; public CountryRepository() { _db = new Database(System.Configuration.ConfigurationManager.AppSettings["DbConnName"]); } public IEnumerable<Country> GetAll() { return _db.Query<Country>("SELECT * FROM Countries ORDER BY Name", null); } public Country Get(object id) { return _db.SingleById(id); } public void Add(Country c) { _db.Insert(c); } /* ...And So On... */ } Typically in my UI I do not display all of the children (states), but I do display an aggregate count. So my country list view model might look like this: public partial class CountryListVM { [Key] public int CountryId { get; set; } public string Name { get; set; } public string CountryCode { get; set; } public int StateCount { get; set; } } When I'm using the underlying data provider (Entity Framework, NHibernate, PetaPoco, etc) directly in my UI layer, I can easily do something like this: IList<CountryListVM> list = db.Countries .OrderBy(c => c.Name) .Select(c => new CountryListVM() { CountryId = c.CountryId, Name = c.Name, CountryCode = c.CountryCode, StateCount = c.States.Count }) .ToList(); But when I'm using a repository or service pattern, I abstract away direct access to the data layer. It seems as though my options are to: Return the Country with a populated States collection, then map over in the UI layer. The downside to this approach is that I'm returning a lot more data than is actually needed. -or- Put all my view models into my Common dll library (as opposed to having them in the Models directory in my MVC app) and expand my repository to return specific view models instead of just the domain pocos. The downside to this approach is that I'm leaking UI specific stuff (MVC data validation annotations) into my previously clean POCOs. -or- Are there other options? How are you handling these types of things?
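    One more option, sketched under the assumption that the data provider can expose an IQueryable<Country> (an EF set or NHibernate's LINQ query); the ICountryReadQueries name is illustrative. The idea is to keep the repository for commands/aggregates and put a thin, read-only query class next to the view models in the web project, so the projection happens in the database and the States collections are never materialised.

    using System.Collections.Generic;
    using System.Linq;

    // Lives in the UI/web project, next to CountryListVM.
    public interface ICountryReadQueries
    {
        IList<CountryListVM> GetCountryList();
    }

    public class CountryReadQueries : ICountryReadQueries
    {
        private readonly IQueryable<Country> _countries;

        public CountryReadQueries(IQueryable<Country> countries)
        {
            _countries = countries; // e.g. an EF DbSet<Country> or session.Query<Country>()
        }

        public IList<CountryListVM> GetCountryList()
        {
            return _countries
                .OrderBy(c => c.Name)
                .Select(c => new CountryListVM
                {
                    CountryId = c.CountryId,
                    Name = c.Name,
                    CountryCode = c.CountryCode,
                    StateCount = c.States.Count // translated to a COUNT by LINQ providers
                })
                .ToList();
        }
    }

    With a micro-ORM that has no IQueryable (PetaPoco, Dapper), the same shape can be kept by giving the query class a hand-written SQL statement that selects the three columns plus a COUNT of states and materialises CountryListVM directly.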

    Read the article

  • One to many in nhibernate mapping problem

    - by chobo2
    Hi I have this using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Demo.Framework.Domain { public class UserEntity { public virtual Guid UserId { get; protected set; } } } using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TDemo.Framework.Domain { public class Users : UserEntity { public virtual string OpenIdIdentifier { get; set; } public virtual string Email { get; set; } public virtual IList<Movie> Movies { get; set; } } } using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Demo.Framework.Domain { public class Movie { public virtual int MovieId { get; set; } public virtual Guid UserId { get; set; } // not sure if I should inherit UserEntity public virtual string Title { get; set; } public virtual DateTime ReleaseDate { get; set; } // in my ms sql 2008 database I want this to be just a Date type. Not sure how to do that. public virtual int Upc { get; set; } } } <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Demo.Framework" namespace="Demo.Framework.Domain"> <class name="Users"> <id name="UserId"> <generator class="guid.comb" /> </id> <property name="OpenIdIdentifier" not-null="true" /> <property name="Email" not-null="true" /> </class> <subclass name="Movie"> <list name="Movies" cascade="all-delete-orphan"> <key column="MovieId" /> <index column="MovieIndex" /> // not sure what index column is really. <one-to-many class="Movie"/> </list> </subclass> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Demo.Framework" namespace="Demo.Framework.Domain"> <class name="Movie"> <id name="MovieId"> <generator class="native" /> </id> <property name="Title" not-null="true" /> <property name="ReleaseDate" not-null="true" type="Date" /> <property name="Upc" not-null="true" /> <property name="UserId" not-null="true" type="Guid"/> </class> </hibernate-mapping> I get this error 'extends' attribute is not found or is empty. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: NHibernate.MappingException: 'extends' attribute is not found or is empty. Source Error: Line 17: { Line 18: Line 19: var nhConfig = new Configuration().Configure(); Line 20: var sessionFactory = nhConfig.BuildSessionFactory(); Line 21:
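    The 'extends' attribute is not found or is empty error comes from the <subclass> element: <subclass> is for inheritance mapping and must name (or be nested inside) the class it extends. Movies is just a collection on Users, so a sketch of the intended mapping (key and index column names assumed from the classes above) moves the <list> inside the Users <class> element:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                       assembly="Demo.Framework" namespace="Demo.Framework.Domain">
      <class name="Users">
        <id name="UserId">
          <generator class="guid.comb" />
        </id>
        <property name="OpenIdIdentifier" not-null="true" />
        <property name="Email" not-null="true" />
        <!-- the collection belongs inside the class, not in a <subclass> -->
        <list name="Movies" cascade="all-delete-orphan">
          <key column="UserId" />        <!-- FK column on the Movie table -->
          <index column="MovieIndex" />  <!-- position column required by <list>; a <bag> avoids it -->
          <one-to-many class="Movie" />
        </list>
      </class>
    </hibernate-mapping>

    Since Movie also maps UserId as a plain property, that column ends up written from two places; marking the property insert="false" update="false" (or dropping it in favour of a Users reference) keeps the two from fighting over the same column.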

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL)                                           (COMMIT) Commit Transaction As a SQL language we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. This statement also erases all save points in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.   Until you commit a transaction: ·         You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit ·         You can roll back (undo) any changes made during the transaction with the ROLLBACK statement   Note: Most of the people think that when we type commit data or changes of what you have made has been written to data files, but this is wrong when you type commit it means that you are saying that your job has been completed and respective verification will be done by oracle engine that means it checks whether your transaction achieved consistency when it finds ok it sends a commit message to the user from log buffer but not from data buffer, so after writing data in log buffer it insists data buffer to write data in to data files, this is how it works.   Before a transaction that modifies data is committed, the following has occurred: ·         Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction ·         Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed ·         The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed   Note:   The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some times after the transaction commits.   When a transaction is committed, the following occurs: 1.      The internal transaction table for the associated undo table space records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table 2.      The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction 3.      Oracle releases locks held on rows and tables 4.      
Oracle marks the transaction complete   Note:   The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency application developers can specify that redo be written asynchronously and that transaction do not need to wait for the redo to be on disk.   The syntax of Commit Statement is   COMMIT [WORK] [COMMENT ‘your comment’]; ·         WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Examples Committing an Insert INSERT INTO table_name VALUES (val1, val2); COMMIT WORK; ·         COMMENT Comment is also optional. This clause is supported for backward compatibility. Oracle recommends that you used named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Examples The following statement commits the current transaction and associates a comment with it: COMMIT     COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637'; ·         WRITE Clause Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: The volume of update transactions is large, requiring that the redo log be written to disk frequently. The application can tolerate the loss of an asynchronously committed transaction. The latency contributed by waiting for the redo log write to occur contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Examples To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement: COMMIT WRITE BATCH; Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user. WAIT | NOWAIT Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. 
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior. IMMEDIATE | BATCH Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This operation option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. ·         FORCE Clause Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction. ·         In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN. ·         The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause. ·         Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause. Examples Forcing an in doubt transaction. Example The following statement manually commits a hypothetical in-doubt distributed transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to issue this statement. COMMIT FORCE '22.57.53';
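    Pulling a few of the clauses above together, a small illustration (generic Oracle syntax; the orders table and savepoint name are made up) of a transaction that rolls back to a savepoint and then commits with batched, asynchronous redo:

    INSERT INTO orders (order_id, status) VALUES (1001, 'NEW');
    SAVEPOINT after_insert;

    UPDATE orders SET status = 'APPROVED' WHERE order_id = 1001;

    -- Undo only the update; the insert is still part of the transaction
    ROLLBACK TO SAVEPOINT after_insert;

    -- Buffer the redo and return without waiting for the log write
    COMMIT WRITE BATCH NOWAIT;

    As described above, the COMMIT also erases the savepoint and releases the row locks; BATCH NOWAIT trades a small window of durability risk for lower commit latency.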

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11 which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhaling a Solaris Container from Solaris 10 or entire Solaris 10 systems ("the global zone") into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone. 
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so lets log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can login to which zones. Voila! a Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and output from a ping to a nearby host. 
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under by Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.
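    To sanity-check the result from the Solaris 11 global zone, the standard zone tooling applies; a short sketch using the zone name from the example above:

    # List zones with their state and brand (s10u10 should show brand solaris10)
    zoneadm list -cv

    # Show the zone's configuration, including the anet/vnic wiring
    zonecfg -z s10u10 info

    # Routine lifecycle operations work as for native zones
    zoneadm -z s10u10 reboot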

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:
CREATE DATABASE InMemoryDB
ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = [InMemoryDB_mod_dir1], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
    (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
COLLATE Latin1_General_100_BIN2;
The example above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code, change the drive and folder locations to suit your machine. Also notice that a Windows (non-SQL) binary collation was specified; a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to do so.
ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012 ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
GO
Creating Tables: There is no major syntactic difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.
DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema.
Indexing a Memory-Optimized Table: A memory-optimized table must always have an index, and for tables created with DURABILITY = SCHEMA_AND_DATA this can be satisfied by declaring a PRIMARY KEY constraint when creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.
CREATE TABLE Mem_Table
(
    [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    [City] VARCHAR(32) NULL,
    [State_Province] VARCHAR(32) NULL,
    [LastModified] DATETIME NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
As you can see in the query above, we use the MEMORY_OPTIMIZED = ON clause to make sure the table is treated as a memory-optimized table rather than a normal table, and the DURABILITY = SCHEMA_AND_DATA clause so that the data is persisted along with the metadata. Notice also that the table declares a PRIMARY KEY up front, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage for memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let’s explore some examples to understand the performance gains they offer.
In the demo exercise below I will be using the database created earlier in this article, i.e. InMemoryDB.
USE InMemoryDB
GO
-- Create a disk-based table
CREATE TABLE dbo.Disktable
(
    Id INT IDENTITY,
    Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Insert 100000 records into dbo.Disktable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Finally, do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is faster still, since it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig
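The article mentions natively compiled stored procedures but does not show one, so here is a minimal sketch, assuming the dbo.Memorytable_durable table from the demo above (the procedure name and parameters are illustrative, not from the original article):
-- Minimal sketch of a natively compiled stored procedure (SQL Server 2014 syntax).
-- It may only reference memory-optimized tables, such as dbo.Memorytable_durable.
CREATE PROCEDURE dbo.usp_InsertMemorytableRow
    @Id INT,
    @Name CHAR(40)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code at CREATE time.
    INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
END
GO
-- Example usage:
EXEC dbo.usp_InsertMemorytableRow @Id = 100001, @Name = 'sachin100001';
Because the whole body runs as one atomic block against memory-optimized data, the interpreter overhead of regular Transact-SQL is avoided on every call.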

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 4

    - by MarkPearl
    Learning Outcomes
Explain the characteristics of memory systems
Describe the memory hierarchy
Discuss cache memory principles
Discuss issues relevant to cache design
Describe the cache organization of the Pentium
Computer Memory Systems
The key characteristics of memory are…
Location – internal or external
Capacity – expressed in terms of bytes
Unit of Transfer – the number of bits read out of or written into memory at a time
Access Method – sequential, direct, random or associative
From a user's perspective the two most important characteristics of memory are…
Capacity
Performance – access time, memory cycle time, transfer rate
The trade-off for memory happens along three axes…
Faster access time, greater cost per bit
Greater capacity, smaller cost per bit
Greater capacity, slower access time
This leads people to use a tiered approach to memory. As one goes down the hierarchy, the following occurs…
Decreasing cost per bit
Increasing capacity
Increasing access time
Decreasing frequency of access of the memory by the processor
The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways…
Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.
Cache Memory Principles
Cache memory is substantially faster than main memory. A caching system works as follows… When the processor attempts to read a word of memory, a check is made to see whether the word is in cache memory. If it is, the data is supplied; if it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache it is likely that there will be future references to that same memory location or to other words in the block.
Elements of Cache Design
While there are a large number of cache implementations, a few basic design elements serve to classify and differentiate cache architectures…
Cache Addresses
Cache Size
Mapping Function
Replacement Algorithm
Write Policy
Line Size
Number of Caches
Cache Addresses
Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
The disadvantage of addressing the cache with virtual addresses is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch or have extra bits added to each cache line to identify which virtual address space the address refers to.
Cache Size
We would like the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.
Mapping Function
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used…
Direct – the simplest technique; maps each block of main memory into only one possible cache line
Associative – allows each main memory block to be loaded into any line of the cache
Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
For detailed explanations of each approach, read the textbook (pages 148–154).
Replacement Algorithm
For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches…
LRU (Least recently used)
FIFO (First in first out)
LFU (Least frequently used)
Random selection
Write Policy
When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache, it can simply be discarded. If a write has occurred, the changes in the cache must be propagated back to main memory. There are several approaches to achieve this, including…
Write Through – all writes to the cache are made to main memory as well, at the point of the change
Write Back – writes are made only to the cache; when a block is replaced, it is written back to main memory if its dirty bit is set
The problem is complicated when we have multiple caches; there are techniques to accommodate this but I have not summarized them.
Line Size
When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play…
Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
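To make the mapping-function discussion above concrete, here is a small, self-contained sketch (C#, with illustrative cache dimensions that are not taken from the text) of how a direct-mapped cache derives the line number and tag from a memory address:
using System;

class DirectMappedCacheDemo
{
    static void Main()
    {
        // Illustrative sizes: 64-byte lines, 1024 lines (a 64 KB cache).
        const uint blockSize = 64;
        const uint numLines = 1024;

        uint address = 0x0001F2A4;                 // a sample byte address
        uint blockNumber = address / blockSize;    // which main-memory block holds it
        uint lineIndex = blockNumber % numLines;   // the only cache line that block can occupy
        uint tag = blockNumber / numLines;         // stored with the line to identify the block

        Console.WriteLine($"address 0x{address:X8} -> line {lineIndex}, tag {tag}");
    }
}
Because each memory block maps to exactly one line, a lookup only has to compare a single tag, which is what makes direct mapping the simplest of the three techniques.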
Pentium 4 and ARM cache organizations
The Pentium 4 processor core consists of four major components:
Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to workaround. In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of if we should add oldfind, and if so what should we call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, that seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind not of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”. One of the concerns we often discuss in ARC is if a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command: “Hello, Oracle Solaris Support, how may I help you?” “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.” “Did he get the binutils package too?” “No he just said findutils, do we need binutils?” “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.” “How much does Oracle charge for that package?” “It’s free for Solaris users.” “You mean Oracle ships packages of gold to customers for free?” “Yes, if you get the binutils package, it includes GNU gold.” “New gold? Is that some sort of alchemy, turning stuff into gold?” “Not new gold, gold from the GNU project.” “Oracle’s taking gold from the GNU project and shipping it to me?” “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.” “And GNU doesn’t mind Oracle taking their gold and giving it to customers?” “No, GNU is a non-profit whose goal is to share their software.” “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?” “Oh, Google donated it to them.” “Ah! So Oracle will give me the gold that GNU got from Google!” “Yes, if you get the package from us.” “How do I get the package with the gold?” “Just run pkg install binutils and it will put it on your disk.” “We’ve got multiple disks here - which one will it put it on?” “The one with the system image - do you know which one that is? 
“Well the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So thats on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!” Needless to say, we decided this might not be the best idea. Since the GNU package hasn’t had to release a serious bug fix in the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so it didn’t seem that adding oldfind was really necessary, so we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • C# Repository pattern: One repository per subclass?

    - by Alex
    I am wondering whether you would create a repository for each subclass of a domain model. For example, there are two classes:
public class Person
{
    public virtual String GivenName { get; set; }
    public virtual String FamilyName { get; set; }
    public virtual String EMailAddress { get; set; }
}
public class Customer : Person
{
    public virtual DateTime RegistrationDate { get; set; }
    public virtual String Password { get; set; }
}
Would you create both a PersonRepository and a CustomerRepository, or just the PersonRepository, which would also be able to execute Customer-related queries?
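For reference, the two options being weighed could be sketched roughly like this (the interface names and query methods are illustrative, not part of the original question):
using System;
using System.Collections.Generic;

// Option 1: a single repository for the Person hierarchy,
// which also exposes the Customer-specific queries.
public interface IPersonRepository
{
    Person GetByEMailAddress(String eMailAddress);
    IList<Customer> FindCustomersRegisteredSince(DateTime date);
}

// Option 2: one repository per subclass, so Customer-specific
// queries live in their own repository.
public interface ICustomerRepository
{
    Customer GetByEMailAddress(String eMailAddress);
    IList<Customer> FindRegisteredSince(DateTime date);
}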

    Read the article

  • It&rsquo;s A Team Sport: PASS Board Year 2, Q3

    - by Denise McInerney
    As I type this I’m on an airplane en route to my 12th PASS Summit. It’s been a very busy 3.5 months since my last post on my work as a Board member. Nearing the end of my 2-year term I am struck by how much has happened, and yet how fast the time has gone. But I’ll save the retrospective post for next time and today focus on what happened in Q3. In the last three months we made progress on several fronts, thanks to the contributions of many volunteers and HQ staff members. They deserve our appreciation for their dedication to delivering for the membership week after week. Virtual Chapters The Virtual Chapters continue to provide many PASS members with valuable free training. Between July and September of 2013 VCs hosted over 50 webinars with a total of 4300 attendees. This quarter also saw the launch of the Security & Global Russian VCs. Both are off to a strong start and I welcome these additions to the Virtual Chapter portfolio. At the beginning of 2012 we had 14 Virtual Chapters. Today we have 22. This growth has been exciting to see. It has also created a need to have more volunteers help manage the work of the VCs year-round. We have renewed focus on having Virtual Chapter Mentors work with the VC Leaders and other volunteers. I am grateful to volunteers Julie Koesmarno, Thomas LeBlanc and Marcus Bittencourt who join original VC Mentor Steve Simon on this team. Thank you for stepping up to help. Many improvements to the VC web sites have been rolling out over the past few weeks. Our marketing and IT teams have been busy working a new look-and-feel, features and a logo for each VC. They have given the VCs a fresh, professional look consistent with the rest of the PASS branding, and all VCs now have a logo that connects to PASS and the particular focus of the chapter. 24 Hours of PASS The Summit Preview edition  of 24HOP was held on July 31 and by all accounts was a success. Our first use of the GoToWebinar platform for this event went extremely well. Thanks to our speakers, moderators and sponsors for making this event possible. Special thanks to HQ staffers Vicki Van Damme and Jane Duffy for a smoothly run event. Coming up: the 24HOP Portuguese Edition will be held November 13-14, followed December 12-13 by the Spanish Edition. Thanks to the Portuguese- and Spanish-speaking community volunteers who are organizing these events. July Board Meeting The Board met July 18-19 in Kansas City. The first order of business was the election of the Executive Committee who will take office January 1. I was elected Vice President of Marketing and will join incoming President Thomas LaRock, incoming Executive Vice President of Finance Adam Jorgensen and Immediate Past President Bill Graziano on the Exec Co. I am honored that my fellow Board members elected me to this position and look forward to serving the organization in this role. Visit to PASS HQ In late September I traveled to Vancouver for my first visit to PASS HQ, where I joined Tom LaRock and Adam Jorgensen to make plans for 2014.  Our visit was just a few weeks before PASS Summit and coincided with the Board election, and the office was humming with activity. I saw first-hand the enthusiasm and dedication of everyone there. In each interaction I observed a focus on what is best for PASS and our members. Our partners at HQ are key to the organization’s success. This week at PASS Summit is a great opportunity for all of us to remember that, and say “thanks.” Next Up PASS Summit—of course! 
I’ll be around all week and look forward to connecting with many of our members over meals, at the Community Zone and between sessions. In the evenings you can find me at the Welcome Reception, Exhibitor’s Reception and Community Appreciation Party. And I will be at the Board Q&A session Friday at 12:45 p.m.
Transitions
The newly elected Exec Co and Board members take office January 1, and the Virtual Chapter portfolio is transitioning to a new director. I’m thrilled that Jen Stirrup will be taking over. Jen has experience as a volunteer and co-leader of the Business Intelligence Virtual Chapter and was a key contributor to the BI VC's expansion to serving our members in the EMEA region. I’ll be working closely with Jen over the next couple of months to ensure a smooth transition.

    Read the article

  • How do I get Fluent NHibernate to create a varbinary(max) field in SQL Server

    - by czk
    Hi, how can I get Fluent NHibernate to create a varbinary field in a SQL Server 2005 table that uses a field size of varbinary(max)? At the moment I always get the default of varbinary(8000), which isn't big enough, as I'm going to be storing image files. I've tried using Castle.ActiveRecord but haven't had any success yet.
[ActiveRecord]
public class MyFile : Entity
{
    public virtual string FileName { get; set; }
    public virtual string FileType { get; set; }
    public virtual int FileVersion { get; set; }
    public virtual int FileLength { get; set; }

    [Property(ColumnType = "BinaryBlob", SqlType = "VARBINARY(MAX)")]
    public virtual byte[] FileData { get; set; }
}
I've been failing to find a solution for hours now, so thanks in advance. czk
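One approach often suggested for this situation is to set a custom SQL type and an explicit length on the property in a Fluent NHibernate ClassMap. A minimal sketch, assuming the CustomSqlType and Length property-mapping methods are available in the Fluent NHibernate version in use (the map class and identifier mapping are illustrative):
using FluentNHibernate.Mapping;

public class MyFileMap : ClassMap<MyFile>
{
    public MyFileMap()
    {
        // Id(x => x.Id);  // map the identifier exposed by your Entity base class (assumed name)
        // Other property mappings omitted for brevity.
        Map(x => x.FileData)
            .CustomSqlType("VARBINARY(MAX)")  // emitted into the generated schema
            .Length(int.MaxValue);            // avoids the varbinary(8000) default
    }
}
The generated schema then declares the column as varbinary(max) instead of truncating it to the 8000-byte default.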

    Read the article

  • Delving into design patterns, and what that means for the Oracle user experience

    - by Kathy.Miedema
    By Kathy Miedema, Oracle Applications User Experience
George Hackman, Senior Director, Applications User Experiences
The Oracle Applications User Experience team has some exciting things happening around Fusion Applications design patterns. Because we’re hoping to have some new offerings soon (stay tuned with VoX to see what’s in the pipeline around Fusion Applications design patterns), now is a good time to talk more about what design patterns can do for the individual user as well as the entire company. George Hackman, Senior Director of Operations User Experience, says the first thing to note is that user experience is not just about the user interface. It’s about understanding how people do things, observing them, and then finding the patterns that emerge. The Applications UX team develops those patterns and then builds them into Oracle applications. What emerges, Hackman says, is a consistent, efficient user experience that promotes a productive workplace.
Creating design patterns
What is a design pattern in the context of enterprise software? “Every day, people use technology to get things done,” Hackman says. “They navigate a virtual world that reaches from enterprise to consumer apps, and from desktop to mobile. This virtual world is constantly under construction. New areas are being developed and old areas are being redone. As this world is being built and remodeled, efficient pathways and practices emerge. “Oracle's user experience team watches users navigate this world. We measure their productivity and ask them about their satisfaction. We take the most efficient, most productive pathways from the enterprise and consumer world and turn them into Oracle's user experience patterns.” Hackman describes the process as combining all of the best practices from every part of a user’s world. Members of the user experience team observe, analyze, design, prototype, and measure each work task to find the best possible pattern for a particular work flow. As the team builds the patterns, “we make sure they are fully buildable using Oracle technology,” Hackman said. “So customers know they can use these patterns. There’s no need to make something up from scratch, not knowing whether you can even build it.” Hackman says that creating something on a computer is a good example of a user experience pattern. “People are creating things all the time,” he says. “On the consumer side, they are creating documents. On the enterprise side, they are creating expense reports. On a mobile phone, they are creating contacts. They are using different apps like iPhone or Facebook or Gmail or Oracle software, all doing this creation process.” The Applications UX team starts their process by observing how people might create something. “We observe people creating things. We see the patterns, we analyze and document, then we apply them to our products. It might be different from phone to web browser, but we have these design patterns that create a consistent experience across platforms, and across products, too.”
The result for customers
Oracle constantly improves its part of the virtual world, Hackman said. New products are created and existing products are upgraded. Because Oracle builds user experience design patterns, Oracle's virtual world becomes both more powerful and more familiar at the same time. Because of design patterns, users can navigate with ease as they embrace the latest technology – because it behaves the way they expect it to.
This means less training and faster adoption for individual users, and more productivity for the business as a whole. Hackman said Oracle gives customers and partners access to design patterns so that they can build in the virtual world using the same best practices. Customers and partners can extend applications with a user experience that is comfortable and familiar to their users. For businesses that are integrating different Oracle applications, design patterns are key. The user experience created in E-Business Suite should be similar to the user experience in Fusion Applications, Hackman said. If a user is transitioning from one application to the other, it shouldn’t be difficult for them to do their work. With design patterns, it isn’t. “Oracle user experience patterns are the building blocks for the virtual world that ensure productivity, consistency and user satisfaction,” Hackman said. “They are built for the enterprise, but incorporate the best practices from across the virtual world. They empower productivity and facilitate social interaction. When you build with patterns, you get all the end-user benefits of less training / retraining from the finished product. You also get faster / cheaper development.” What’s coming? You can already access design patterns to help you build Dashboards with OBIEE here. And we promised you at the beginning that we had something in the pipeline on Fusion Applications design patterns. Look for the announcement about when they are available here on VoX.

    Read the article

  • Tomcat - virtual hosting - name / IP / port based

    - by lisak
    Hey, what are the usage scenarios for these kinds of virtual hosting?
Name based – typical Tomcat virtual hosting: one HOME instance with many contexts, each as an individual host (see the sketch below).
IP based / port based – multiple instances of Tomcat (how do these compare in performance and memory consumption?) running on IP aliases (virtual IPs) on one network adapter, usually behind an Apache HTTP server that can do name-based virtual hosting. Otherwise I can't figure out how I would forward requests in iptables/the firewall based on IP address, since there is only one address.
How is IP-based virtual hosting done with Tomcat and multiple instances? I'd like to hear some usage scenarios from your experience. How are you running your applications? Because there are applications that have their own modified classloader and are developed to run alone within a Tomcat instance, and then there are trivial applications which can run within one instance without problems. Many thanks
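For the name-based case, a single Tomcat instance can serve several hosts from one conf/server.xml; a minimal sketch (host names and appBase directories are illustrative, not from the question):
<!-- Name-based virtual hosting in one Tomcat instance: two <Host> elements inside the <Engine>. -->
<Engine name="Catalina" defaultHost="app1.example.com">
  <Host name="app1.example.com" appBase="webapps-app1"
        unpackWARs="true" autoDeploy="true"/>
  <Host name="app2.example.com" appBase="webapps-app2"
        unpackWARs="true" autoDeploy="true"/>
</Engine>
Tomcat picks the <Host> by matching the HTTP Host header, so all hosts share one JVM and one set of connector ports; the IP/port-based scenarios in the question instead run separate Tomcat instances, each bound to its own address or port.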

    Read the article

< Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >