Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course, technology vendors make them sound very nice. A concern I read very often in different forums is that there is a theoretical possibility of the server chassis going down, which would in consequence take all the blades down. That is due to the shared infrastructure. My reaction to this risk would be to add redundancy and buy two chassis instead of one (very costly, of course). Some people (including HP vendors) try to convince us that the chassis is very unlikely to fail thanks to its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something does go down, spare parts might be required, which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: What is your experience? Do they go down as a whole, and which parts of the shared infrastructure are the ones that realistically fail? The question extends to shared storage. Again I would say that we need two storage units instead of only one, and again the vendors say these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be reliable without redundancy, but maybe you can tell me whether you have had successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive... thanks a lot, best regards, Christian

    Read the article

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 KB is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues. http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
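
    One practical note on the commands above: values written to /sys do not survive a reboot, so a common way to persist them is a udev rule that reapplies the settings whenever the block device appears. The sketch below assumes the cciss naming used above (in sysfs the kernel device name replaces the slash with an exclamation mark; confirm the exact name on your system) and reuses the article's values as placeholders, not recommendations.

        # /etc/udev/rules.d/60-raid-tuning.rules  (a sketch, assuming a cciss device)
        # Reapply scheduler, queue depth, and read-ahead on add/change events.
        ACTION=="add|change", KERNEL=="cciss!c0d0", \
            ATTR{queue/scheduler}="noop", \
            ATTR{queue/nr_requests}="512", \
            ATTR{queue/read_ahead_kb}="2048"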

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check whether port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it listens on port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
        application: yaws
        exited: {shutdown,{yaws_app,start,[normal,[]]}}
        type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target  prot opt source                       destination
        ACCEPT  tcp  --  ip-192-168-2-0.ec2.internal  ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT  tcp  --  0.0.0.0                      anywhere                      tcp dpt:http
        ACCEPT  all  --  anywhere                     anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT  tcp  --  anywhere                     anywhere                      tcp dpt:http
        ACCEPT  tcp  --  anywhere                     anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target  prot opt source                       destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source                       destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with source IP 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for the patience.
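
    A note on the {error,eacces} failure above: on Linux, a process that is not running as root cannot bind ports below 1024, which matches the pattern of port 8000 working while port 80 fails. The question doesn't confirm which user Yaws runs as, but one commonly suggested workaround is to keep Yaws on port 8000 and redirect inbound port 80 traffic to it in the NAT table, sketched below.

        # Sketch: redirect external port 80 to Yaws on 8000 (run as root).
        # Assumes default AWS-style networking; adjust the interface if needed.
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000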

    Read the article

  • Windows 2003 Storage Server Hanging on Large File Transfers

    - by user25272
    In one of our offices we have a Dell PowerVault 745N NAS device which acts as the main file server. It's running 32-bit Windows 2003 Storage Server SP2 with 3GB RAM. The server holds around 60 users' HOME folders, which are mapped via AD. The office clients are a mix of XP SP3, Vista and Windows 7. Occasionally the server will completely hang when transferring large files. When the hang happens the console becomes unresponsive, with only the mouse active and blank wallpaper. Sometimes stopping the copy frees the server, sometimes not. The hanging can last around 20 minutes. During this time other servers also become unresponsive, with blank wallpaper at the console. If you do manage to get onto another server, the taskbar and run commands are unresponsive. This sometimes extends to the client computers as well, with explorer crashing. I'm guessing this is due to the HOME folder mapping. Eventually the NAS server will free up and everything will be back to normal. The server is configured as follows:

        PERC 4/DC
          DATA 2 - 12 SCSI HDD - RAID5
          SHADOWCOPY 2 SCSI HDD - RAID1
        CERC SATA DATA 11
          4 SATA HDD - RAID5
          OS 4 SATA HDD - RAID5

    All the drivers and firmware are up to date. I've been through all the diagnostics with Dell and the hardware has come up clean, including full HDD tests on the arrays. The server has NOD32 installed as the AV, but the hanging happens even when it is uninstalled. There are no errors in the event log when this happens and we don't have any errors logged on any of our ProCurve switches. DNS is fine on the domain and AD, from what I can tell, is running happily. There are no DFS or NFS shares set up either; all the shares are standard Windows. I've unchecked the "allow the computer to turn off this device to save power" box under Power Management on the NIC, set Link Speed and Duplex to Auto-negotiate 1000, and increased the Receive Descriptors buffer from 256 to 352 (reserves more CPU resource for handling data). I've run network traces using Network Monitor and have found the following:

        417  8.078125  {SMB:192, NbtSS:25, TCP:24, IPv4:23}  192.168.2.244  192.168.5.35  SMB
        SMB:R; Nt Create Andx - NT Status: System - Error, Code = (52) STATUS_OBJECT_NAME_NOT_FOUND

    I've tried different cabling, NICs and switch ports, all with the same result. Transferring files from other servers on the domain is fine. All I haven't done is run CHKDSK on the drives to look for any file system errors. On the Vista clients I have also run netsh interface tcp set global autotuning=disabled with no result. Could it be that the server has a faulty drive, or that the I/O is too much for it to handle? Any ideas why the hang would cause issues with the other servers on the LAN? Many thanks.
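
    For reference, since CHKDSK is the one untried step mentioned above: a read-only scan can run without taking the volume offline, while a repair pass needs exclusive access (or a check scheduled at the next boot). The drive letter below is a placeholder for the data volume.

        rem Read-only scan (reports problems, makes no repairs)
        chkdsk D:

        rem Full check with fix and bad-sector recovery; for an in-use volume
        rem Windows will offer to dismount it or schedule the check at next boot
        chkdsk D: /f /r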

    Read the article

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made today to Windows Azure, and some of the great new features it provides. Today I'm also excited to announce the release of the Windows Azure SDK 2.2. Today's SDK release adds even more great features including:

    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    The below post has more details on what's available in today's Windows Azure SDK 2.2 release. Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, and which highlights these features in a video demonstration.

    Visual Studio 2013 Support

    Version 2.2 of the Windows Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the Preview of Visual Studio 2013, we recommend that you upgrade your projects to SDK 2.2. SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012.

    Integrated Windows Azure Sign-In within Visual Studio

    Integrated Windows Azure Sign-In support within Visual Studio is one of the big improvements added with this Windows Azure SDK release. Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer inside Visual Studio and choose the "Connect to Windows Azure" context menu option to connect to Windows Azure. Doing this will prompt you to enter the email address of the account you wish to sign in with. You can use either a Microsoft Account (e.g. Windows Live ID) or an Organizational account (e.g. Active Directory) as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer (and you can start using them).

    With this new integrated sign-in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate. All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs + Dev Tools + Management Portal + PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers + administrators working on a project. It also allows organizations and enterprises to use the same authentication model in the cloud that they use for their developers on-premises, and it ensures that employees who leave an organization immediately lose access to their company's cloud-based resources once their Active Directory account is suspended.
    Filtering/Subscription Management

    Once you log in within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking and using the "Filter Services" context menu within the Server Explorer. You can also use the "Manage Subscriptions" context menu to manage your Windows Azure Subscriptions. Bringing up the "Manage Subscriptions" dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them. The "Certificates" tab allows you to continue to import and use management certificates to manage Windows Azure resources as well. We have not removed any functionality with today's update; all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine. The new integrated sign-in support provided with today's release is purely additive.

    Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them. We will enable them with integrated sign-in in a future update.

    Remote Debugging Cloud Resources within Visual Studio

    Today's Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you are now able to have more visibility than ever before into how your code is operating live in Windows Azure. Let's walk through how to enable remote debugging for a Cloud Service.

    Remote Debugging of Cloud Services

    To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Service's publish dialog wizard. Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox. Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it. Once the debugger attaches to the Cloud Service and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real time, and see exactly how your app is running in the cloud.

    Today's remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud. Support for remote debugging Cloud Services is available as of today, and we'll also enable support for remote debugging Web Sites shortly.

    Firewall Management Support with SQL Databases

    By default we enable a security firewall around SQL Databases hosted within Windows Azure. This ensures that only your application (or IP addresses you approve) can connect to them and helps make your infrastructure secure by default. This is great for protection at runtime, but can sometimes be a pain at development time (since by default you can't connect to or manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it). One of the cool features we've added with today's release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.
    Now with the SDK 2.2 release, when you try to connect to a SQL Database using the Visual Studio Server Explorer and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address. You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases, the logic to detect your local IP may not be sufficient (for example: you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access. The new Add Firewall Rule dialog also makes this easy to do. Once connected, you'll be able to manage your SQL Database directly within the Visual Studio Server Explorer. This makes it much easier to work with databases in the cloud.

    Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers

    Last week we released the General Availability Release of Visual Studio 2013 to the web. This is an awesome release with a ton of new features. With today's Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers. This enables you to create a VM in the cloud with VS 2013 pre-installed on it with only a few clicks. Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013.

    Windows Azure Management Libraries for .NET (Preview)

    Having the ability to automate the creation, deployment, and tear down of resources is a key requirement for applications running in the cloud. It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET. These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc). Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API.

    Modern .NET Developer Experience

    We've worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today:

    - Portable Class Library (PCL) support targeting applications built for any .NET Platform (no platform restriction)
    - Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning
    - Support async/await task-based asynchrony (with easy sync overloads)
    - Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc.
    - Factored for easy testability and mocking
    - Built on top of popular libraries like HttpClient and Json.NET

    Below is a list of a few of the management client classes that are shipping with today's initial preview release, along with the assets whose operations they support (and potentially more):

        ManagementClient:               Locations, Credentials, Subscriptions, Certificates
        ComputeManagementClient:        Hosted Services, Deployments, Virtual Machines, Virtual Machine Images & Disks
        StorageManagementClient:        Storage Accounts
        WebSiteManagementClient:        Web Sites, Web Site Publish Profiles, Usage Metrics, Repositories
        VirtualNetworkManagementClient: Networks, Gateways

    Automating Creating a Virtual Machine using .NET

    Let's walk through an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I'm deliberately showing a scenario with a lot of custom options configured, including VHD image gallery enumeration, attaching data drives, and network endpoints + firewall rules setup, to show off the full power and richness of what the new library provides. We'll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery. We'll search for the first VM image that has the word "Windows" in it and use that as our base image to build the VM from. We'll then create a cloud service container in the West US region to host it within. We can then customize some options on it such as setting up a computer name, admin username/password, and hostname. We'll also open up a remote desktop (RDP) endpoint through its security firewall. We'll then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in. Once everything has been set up, the call to create the virtual machine is executed asynchronously. In a few minutes we'll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use.
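
    The original post illustrated these steps with code screenshots that did not survive here. As a rough substitute, the sketch below shows the general shape of the first step (authenticating and enumerating gallery images) using the preview library. The operation-group and type names are recalled from the 2013 preview and may not match the shipped packages exactly; treat this as an illustration, not a reference.

        // Rough sketch only. CertificateCloudCredentials, CloudContext and the
        // VirtualMachineImages operation group are recalled from the preview-era
        // API and may differ in the final library.
        using System;
        using System.Linq;
        using System.Security.Cryptography.X509Certificates;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.Management.Compute;

        class ImageLister
        {
            static void Main()
            {
                // Authenticate with a management certificate (subscription id + cert).
                var credentials = new CertificateCloudCredentials(
                    "your-subscription-id",
                    new X509Certificate2("management.pfx", "pfx-password"));

                using (var compute = CloudContext.Clients.CreateComputeManagementClient(credentials))
                {
                    // Enumerate the VM gallery and pick the first image whose
                    // label mentions Windows, as in the post's walkthrough.
                    var image = compute.VirtualMachineImages.List()
                        .Images
                        .First(i => i.Label.Contains("Windows"));

                    Console.WriteLine("Base image: " + image.Name);
                }
            }
        }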
    Preview Availability via NuGet

    The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you'll need to add the -IncludePrerelease switch when you go to retrieve the packages; the Package Manager Console can be used to get the entire set of libraries to manage your Windows Azure assets. You can also install them within your .NET projects by right-clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command. Make sure to select the "Include Prerelease" drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios.

    Open Source License

    The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure, whether they are for Virtual Machines, Cloud Services, Storage Accounts, Web Sites, and more. Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license and it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute.

    PowerShell Enhancements and our New Script Center

    Today, we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here. Here are some of the improvements provided with it:

    - Windows Azure Active Directory authentication support
    - Script Center providing many sample scripts to automate common tasks on Windows Azure
    - New cmdlets for Media Services and SQL Database

    Script Center

    Windows Azure enables you to script and automate a lot of tasks using PowerShell. People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn how to get started scripting with Windows Azure in the getting-started article, and you can then find many sample scripts across different solutions, including infrastructure, data management, web, and more. All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage.

    Summary

    Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center's growing set of .NET developer resources to guide your development efforts, today's Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • What's new in the RightNow November 2012 release?

    - by Richard Lefebvre
    What's new in the RightNow November 2012 release? To find out, please watch this tutorial with embedded demonstration or read the November 2012 release notes.

    News Facts

    - The November 2012 release of Oracle's RightNow CX Cloud Service marks the completion of development efforts for 2012 and continues Oracle's commitment to enhancing the Oracle RightNow offering following the acquisition.
    - The new release delivers key capabilities designed to help organizations improve customer experiences in order to increase customer acquisition and retention, while reducing total cost of ownership.
    - Part of the Oracle Cloud, Oracle RightNow CX Cloud Service now integrates Oracle RightNow Chat Cloud Service with Oracle Engagement Engine Cloud Service, helping organizations intelligently and proactively engage with customers through the right channel at the right time.
    - Chat solutions have emerged as an important component of a cross-channel customer experience strategy. According to Forrester Research, Inc., chat adoption rose dramatically between 2009 and 2011, from 19% to 37%, and chat has the highest satisfaction level of all customer service channels at 62% satisfaction. (*)
    - To help companies deliver enhanced customer experiences, Oracle has made significant investments in Oracle RightNow Chat Cloud Service throughout 2012. With the addition of rules-based engagement to existing capabilities such as co-browse, mobile chat, and cross-channel knowledge integration with the contact center, all delivered via the cloud, Oracle RightNow Chat Cloud Service is differentiated as the industry-leading chat solution.
    - The Oracle Cloud offers a broad portfolio of software-as-a-service applications, including Oracle Customer Service and Support Cloud Service, which is based on the Oracle RightNow CX Cloud Service.

    New Capabilities

    Key Oracle RightNow Chat Cloud Service and other cross-channel capabilities include:

    - Chat Business Rules, with over 70 built-in rule conditions, leverage the Oracle Engagement Engine to help organizations capture rich visitor data and invoke complex actions and triggers. Chat Business Rules allow granular control over when to engage a customer via the chat channel based on customer behavior, customer profile information and operational information.
    - Click-to-Call provides the option for a customer to engage with a live agent over the phone during the Web browsing experience.
    - Chat Availability Controls provide organizations with the ability to throttle volume through the chat channel based on real-time agent availability and wait-time thresholds. This ability to manage the channel more efficiently allows organizations to provide a better experience to customers using the chat channel.
    - Strategic and Operational Chat Channel Analytics provide better insight into channel and agent productivity, utilization and effectiveness, with both out-of-the-box reports and ad hoc reports. New chat channel analytics provide comprehensive metrics with full data transparency.
    - Background Service Updates improve high-availability metrics for Oracle RightNow Chat Cloud Service during service update periods, setting the industry-leading standard for sales and service delivery to customers via the chat channel.
    Additional capabilities include:

    - Improved Web developer tools for more efficient self-service user interface design
    - Improved administration for enhanced user session management
    - Increased cross-channel community collaboration
    - Enhanced extensibility widgets and syndication management
    - Streamlined content management and analytics capabilities

    Read the full announcement here.

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note: by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100-million-row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question: how does cardinality affect write performance?
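
    For anyone checking this on a real table: the cardinality MySQL reports is an estimate, refreshed by ANALYZE TABLE, and can be inspected per index. A quick illustration (table name hypothetical):

        -- Refresh index statistics, then inspect the estimated cardinality
        ANALYZE TABLE scores;
        SHOW INDEX FROM scores;   -- the Cardinality column is an estimate, not an exact count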

    Read the article

  • WPF: isolated storage file path too long

    - by user342961
    Hi, I'm deploying my WPF app with ClickOnce. When developing locally in Visual Studio, I store files in isolated storage by calling IsolatedStorageFile.GetUserStoreForDomain(). This works just fine and the generated path is C:\Users\Frederik\AppData\Local\IsolatedStorage\phqduaro.crw\hux3pljr.cnx\StrongName.kkulk3wafjkvclxpwvxmpvslqqwckuh0\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (189 chars). But when I deploy my app with ClickOnce, the generated path becomes too long, resulting in a DirectoryNotFoundException when creating the isolated storage directory. The generated path with ClickOnce is: C:\Users\Frederik\AppData\Local\Apps\2.0\Data\OQ0LNXJT.R5V\8539ABHC.ODN\exqu..tion_e07264ceafd7486e_0001.0000_b8f01b38216164a0\Data\StrongName.wy0cojdd3mpvq45404l3gxdklugoanvi\Publisher.ui0lr4tpq53mz2v2c0uqx21xze0w22gq\Files\FilerefData\-581750116 (247 chars). When I browse the folders, all but the last directory of the path exist. When I then try to create a folder at this location, Windows tells me I can't create a directory because the resulting path name will be too long. How can I shorten the path generated by the isolated storage?

    Read the article

  • Cloud Agnostic Architecture?

    - by Dave
    Hi, I'm doing some architecture work on a new solution which will initially run in Windows Azure. However, I'd like the solution (or at least the architecture/design) to be cloud agnostic (to whatever extent is realistic). Has anyone done any work on this front or seen any good white papers/blog posts? Our high-level architecture will consist of a payload being sent to a web service (WCF, for instance); this will be dumped on a queue (for argument's sake) and a worker process will grab messages off this queue and process them. There will be a database of customer information which we'd ideally like to keep out of the cloud, however there are obvious performance considerations. Keen to hear others' thoughts. Cheers, Dave
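
    One common way to keep a design like this portable, sketched below with hypothetical names: hide the queue behind a small interface the worker depends on, and confine the Azure-specific types to a single adapter. The Azure side here uses the storage client API of that era (CloudQueue.AddMessage/GetMessage); swap the adapter to move providers.

        // Hypothetical provider-agnostic queue abstraction (a sketch, not a full design).
        public interface IMessageQueue
        {
            void Enqueue(string payload);
            string Dequeue();   // returns null when the queue is empty
        }

        // Azure-backed adapter; the worker process only ever sees IMessageQueue.
        public class AzureMessageQueue : IMessageQueue
        {
            private readonly Microsoft.WindowsAzure.StorageClient.CloudQueue queue;

            public AzureMessageQueue(Microsoft.WindowsAzure.StorageClient.CloudQueue queue)
            {
                this.queue = queue;
            }

            public void Enqueue(string payload)
            {
                queue.AddMessage(new Microsoft.WindowsAzure.StorageClient.CloudQueueMessage(payload));
            }

            public string Dequeue()
            {
                var msg = queue.GetMessage();
                if (msg == null) return null;
                queue.DeleteMessage(msg);   // simplistic: delete immediately after read
                return msg.AsString;
            }
        }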

    Read the article

  • Cloud e-mail and portal integration: experiences?

    - by Mark McLaren
    I am evaluating cloud e-mail solutions based upon:

    - Google Apps for Education
    - Microsoft Live@edu

    I work for a University and we currently have an institutional portal (based on uPortal). We currently have our local IMAP server and webmail client fully integrated with the portal. We would like to replicate the current portal e-mail experience with the new e-mail services. At present users can see a snapshot of their inbox in the portal and click through into the appropriate place in the webmail client. We expect that we will need to solve similar problems when integrating with the cloud-based e-mail solutions:

    - We need to solve the single sign-on (SSO) problem.
    - We need to be able to access the inbox messages on the user's behalf (e.g. proxy authentication).

    Does anybody have any experience or advice on this? Many thanks, Mark

    Read the article

  • Zend_Auth / Zend_Session error and storing objects in Auth Storage

    - by Martin
    Hi All, I have been having a bit of a problem with Zend_Auth and keep getting an error within my Acl. Within my Login Controller I set up my Zend_Auth storage as follows:

        $auth = Zend_Auth::getInstance();
        $result = $auth->authenticate($adapter);
        if ($result->isValid()) {
            $userId = $adapter->getResultRowObject(array('user_id'), null)->user_id;
            $user = new User_Model_User;
            $users = new User_Model_UserMapper;
            $users->find($userId, $user);
            $auth->getStorage()->write($user);
        }

    This seems to work well and I am able to use the object stored in the Zend_Auth storage within View Helpers without any problems. The problem occurs when I try to use this within my Acl; below is a snippet from my Acl, and as soon as it gets to the if ($auth->hasIdentity()) { line I get the exception detailed further down. The $user->getUserLevel() is a method within the User Model that allows me to convert the user_level_id that is stored in the database to a meaningful name. I am assuming that the autoloader sees these kinds of methods and tries to load all the classes that would be required. Looking at the exception, it appears to be struggling to find the class as it is stored in a module; I have the autoloader namespace set up in my application.ini. Could anyone help with resolving this?

        class App_Controller_Plugin_Acl extends Zend_Controller_Plugin_Abstract
        {
            protected $_roleName;

            public function __construct()
            {
                $auth = Zend_Auth::getInstance();
                if ($auth->hasIdentity()) {
                    $user = $auth->getIdentity();
                    $this->_roleName = strtolower($user->getUserLevel());
                } else {
                    $this->_roleName = 'guest';
                }
            }
        }

        Fatal error: Uncaught exception 'Zend_Session_Exception' with message 'Zend_Session::start() -
        \Web\library\Zend\Loader.php(Line:146): Error #2 include_once(): Failed opening
        'Menu\Model\UserLevel.php' for inclusion (include_path='\Web\application/../library;\Web\library;.;C:\php5\pear') Array'
        in \Web\library\Zend\Session.php:493
        Stack trace:
        #0 \Web\library\Zend\Session\Namespace.php(143): Zend_Session::start(true)
        #1 \Web\library\Zend\Auth\Storage\Session.php(87): Zend_Session_Namespace->__construct('Zend_Auth')
        #2 \Web\library\Zend\Auth.php(91): Zend_Auth_Storage_Session->__construct()
        #3 \Web\library\Zend\A in \Web\library\Zend\Session.php on line 493

    Thanks, Martin
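
    A workaround frequently suggested for this class of error (the session unserializing the identity before the module autoloader can resolve the model's dependencies) is to store a plain object with just the fields the Acl needs, instead of the full mapped model. A sketch, assuming User_Model_User exposes the getter shown in the snippet above:

        // Store a detached identity instead of the full model object
        if ($result->isValid()) {
            $identity = new stdClass();
            $identity->userId    = $userId;
            $identity->userLevel = $user->getUserLevel(); // resolve to a plain string now
            $auth->getStorage()->write($identity);
        }

        // The Acl plugin then reads the role straight off the stored object:
        // $this->_roleName = strtolower($auth->getIdentity()->userLevel);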

    Read the article

  • jQuery Cloud Zoom Plugin and image zoom being actived by mouseenter or mouseover

    - by masimao
    I am using the jQuery Cloud Zoom plugin and I need to add the "mouseenter" or "mouseover" event to activate the zoom when the user puts the mouse over the thumbs. So I have made this change on line 359 of the file cloud-zoom.1.0.2.js (version 1.0.2): $(this).bind('mouseenter click', $(this) ... It works, but if I pass the mouse quickly over the thumbs, the "Loading" text doesn't disappear. Does anyone know what I have to change to solve this? Thanks for any help!
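
    One hypothetical direction (untested against the plugin internals): make sure the loading hint is removed when the pointer leaves a thumb before the zoom image has finished loading. The selectors below are assumptions; check the class names the plugin actually generates in your markup.

        // Sketch: hide any leftover "Loading" hint when the pointer leaves a thumb.
        // '.cloud-zoom-loading' is an assumed class name for the plugin's hint element.
        $('.thumbs a').bind('mouseleave', function () {
            $('.cloud-zoom-loading').hide();
        });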

    Read the article

  • Convert ASP.NET membership system to secure password storage

    - by wrburgess
    I have a potential client that set up their website and membership system in ASP.NET 3.5. When their developer set up the system, it seems he turned off the security/hashing aspect of password storage and everything is stored in the clear. Is there a process to reinstall/change the secure password storage of ASP.NET membership without changing all of the passwords in the database? The client is worried that they'll lose their customers if they all have to go through a massive password change. I've always installed with security on by default, thus I don't know the effect of a switchover. Is there a way to convert the entire system to a secure password system without major effects on the users?
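
    One migration approach that is often suggested for this situation, worth verifying on a copy of the database first: switch the provider's passwordFormat to "Hashed" in web.config, then re-save each user's existing password through the Membership API so it is re-stored under the new format while the password the user types never changes. A sketch, assuming requiresQuestionAndAnswer="false", enablePasswordReset="true", and that the clear-text passwords can still be read during the one-off run:

        // One-off migration sketch (System.Web.Security). GetStoredClearTextPassword
        // is a hypothetical helper that reads the existing clear-text value.
        foreach (MembershipUser user in Membership.GetAllUsers())
        {
            string clearPassword = GetStoredClearTextPassword(user.UserName);
            string temp = user.ResetPassword();        // re-stored under the new (hashed) format
            user.ChangePassword(temp, clearPassword);  // restore the user's original password
        }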

    Read the article

  • Silverlight Isolated Storage and loading big files

    - by Thomas Joulin
    In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored using isolated storage. If I do it this way, will the whole file (around 5 MB) be loaded into memory? If so, what other solution do I have? Edit: more details. I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it finally seems closer to 1k rows.
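
    On the loading question: reading through a stream avoids materializing the whole file at once; only the matching rows are kept. A minimal sketch, assuming the cities sit one per line in a file named cities.txt (names hypothetical):

        // Stream the file line by line instead of loading it whole
        string prefix = "par"; // the text the user has typed so far
        var matches = new List<string>();
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile("cities.txt", FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
                    matches.Add(line); // feed these to the AutoCompleteBox
            }
        }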

    Read the article

  • Switching from one connectionstring to another when moving from development to cloud

    - by Nancy Walker
    Hello, I am working on a cloud application. When I test out the application on my computer I want to have my connection string set as follows in ServiceConfiguration.cscfg: <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> When I publish to the cloud I need to have it set as follows: <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=yyy" /> I keep going from one environment to the other and keep having to change the DataConnectionString. Is there a way that I can automate this? I looked around and can't see any examples but I'm sure some others have the same problem as me. Thanks, Nancy
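
    For context, the mechanism that usually addresses this: the Azure tools support multiple service configurations (e.g. ServiceConfiguration.Local.cscfg and ServiceConfiguration.Cloud.cscfg), so each publish target carries its own DataConnectionString while the code reads the setting by name either way. A sketch of the reading side (the setting name is taken from the question):

        // Read whichever DataConnectionString the active service configuration provides
        string conn = RoleEnvironment.GetConfigurationSettingValue("DataConnectionString");
        CloudStorageAccount account = CloudStorageAccount.Parse(conn);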

    Read the article

  • RSS Feed is giving error in cloud

    - by stackuser1
    In my C# ASP.NET 3.5 application I am using an RSS feed to get current updates of my website. It's working fine, and when we subscribe to the feed it also updates the data as needed. Now our application is deployed in the cloud. There the RSS feed also opens and shows the data, but when I click Subscribe to this feed, I get a diagnostic error page saying "Internet Explorer cannot display this page". Let me know how to work with RSS feeds in a cloud environment.

    Read the article

  • How to deploy App_Data files with Azure cloud service (web role)

    - by user2977157
    I have a read-only data file (for IP geolocation) that my web role needs to read. It is currently in the App_Data folder, which is NOT included in the deployment package for the cloud service. Unlike "web deploy", there is no checkbox for an Azure cloud service deployment to include/exclude App_Data. Is there a reasonable way to get the deployment package to include the App_Data folder/files? Or is using Azure storage for this sort of thing the better way to go (cost- and performance-wise)? I am using Visual Studio 2013 and the Azure SDK 2.2.
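
    One thing worth checking (a general packaging rule rather than anything App_Data-specific): the cloud service package picks up project files whose Build Action is Content, so marking the data file as Content with copy-to-output enabled often suffices. In the .csproj that looks roughly like the following (the file name is a placeholder):

        <!-- Sketch: mark the data file so it is copied into the service package -->
        <ItemGroup>
          <Content Include="App_Data\GeoIPData.dat">
            <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
          </Content>
        </ItemGroup>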

    Read the article

  • Moving webshop storage to NoSQL solution

    - by mare
    If you had a webshop solution based on a SQL Server relational DB, what would be the reasons, if any, to move to NoSQL storage? Does it even make sense to migrate datastores that rely heavily on relations to NoSQL? If starting from scratch, would you choose a NoSQL solution over a relational one for a webshop project, which will, after a while, again end up with a bunch of tables like Articles, Classifications, TaxRates, Pricelists etc. and a magnitude of relations between them? What's MongoDB's support for .NET (4.0) like? Can I count on rich code-generation tools similar to the EF wizard, L2SQL wizard etc. for MongoDB? From what I have read so far, NoSQL stores are mostly suited for document storage and simpler object models. Your answer to this question will help me make the right infrastructure design decisions.

    Read the article

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL database features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. The dilemma here is that I cannot get both features in one storage engine. Either I use MyISAM (which has the FULL-TEXT SEARCH feature) or I use InnoDB (which supports the TRANSACTION feature). I can't have both. My question is: is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?
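
    For context, a pattern often suggested for exactly this dilemma (on MySQL 5.x, before InnoDB gained FULLTEXT support in 5.6): keep the transactional data in InnoDB and maintain a MyISAM shadow table that exists only to carry the FULLTEXT index, kept in sync by the application or by triggers. Table names below are illustrative:

        -- Transactional table
        CREATE TABLE articles (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            body TEXT NOT NULL
        ) ENGINE=InnoDB;

        -- Search-only shadow table carrying the FULLTEXT index
        CREATE TABLE articles_search (
            id   INT UNSIGNED NOT NULL PRIMARY KEY,
            body TEXT NOT NULL,
            FULLTEXT KEY ft_body (body)
        ) ENGINE=MyISAM;

        -- Query the shadow table, join back to the transactional data
        SELECT a.* FROM articles a
        JOIN articles_search s ON s.id = a.id
        WHERE MATCH(s.body) AGAINST ('search terms');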

    Read the article

  • Rapid Repository – Silverlight Development

    - by SeanMcAlinden
    Hi All, One of the questions I was recently asked was whether Rapid Repository would work for normal Silverlight development as well as for the Windows 7 Phone. I can confirm that the current code in the trunk will definitely work for both the Windows 7 Phone and normal Silverlight development. I haven't tested V1.0 for compatibility, but V2.0, which will be released fairly soon, will work absolutely fine. Kind Regards, Sean McAlinden.

    Read the article

  • Windows 7 Phone Database Rapid Repository – V2.0 Beta Released

    - by SeanMcAlinden
    Hi All, A V2.0 beta has been released for the Windows 7 Phone database Rapid Repository; it can be downloaded at the following: http://rapidrepository.codeplex.com/ Along with the new View feature, which greatly enhances querying and performance, various bugs have been fixed, including a more serious bug with the caching that caused the GetAll() method to sometimes return inconsistent results (I'm a little bit embarrassed by this bug). If you are currently using V1.0 in development, I would recommend swapping in the beta immediately. A full release will be available very shortly; I just need a few more days of testing and some input from other users/testers.

    *Breaking Changes* The only real change is that the RapidContext has moved under the main RapidRepository namespace. Various internal methods have actually been made 'internal' and replaced with a more friendly API (I imagine not many users will notice this change). Hope you like it. Kind Regards, Sean McAlinden

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone database is queried for performance (this is in the trunk, not in release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/.

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax.

    View Setup

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically, and a view model's id will always be the same as its corresponding entity's.

    Adding Filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to further limit the data, you could also say only scores above 100:

    Add multiple filters

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the query method directly or perform further querying on the result if required.
    Querying the View

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further Filtering

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information; you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get Full Entity

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising The View

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, you could run it at each application start up.

    Synchronise the view

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature; it will be great for performance and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • Ubuntu 12.04 on Amazon EC2: /dev/xvda1 will be checked for errors at next reboot?

    - by cwd
    I'm running the latest Ubuntu 12.04 AMI (ami-a29943cb) from Canonical on Amazon EC2 and quite often when I log in I get the message:

        *** /dev/xvda1 will be checked for errors at next reboot ***

    I have read a bunch of documentation on this and seem to understand that every so many reboots (around 37; see Mount count / Maximum mount count below) Ubuntu wants to check the disk for errors. I can see that by using dumpe2fs -h /dev/xvda1 (reference) to get information such as:

        Last mounted on:          /
        Filesystem UUID:          1ad27d06-4ecf-493d-bb19-4710c3caf924
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              524288
        Block count:              2097152
        Reserved block count:     104857
        Free blocks:              1778055
        Free inodes:              482659
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      511
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Tue Apr 24 03:07:48 2012
        Last mount time:          Thu Nov  8 03:17:58 2012
        Last write time:          Tue Apr 24 03:08:52 2012
        Mount count:              3
        Maximum mount count:      37
        Last checked:             Tue Apr 24 03:07:48 2012
        Check interval:           15552000 (6 months)
        Next check after:         Sun Oct 21 03:07:48 2012
        Lifetime writes:          2454 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      0a25e04c-6169-4d68-bfa6-a1acd8e39632
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0000158b
        Journal start:            1

    I've tried these things to get rid of the message, and usually the badblocks run is what does it for me:

    1. Run this command and reboot: sudo touch /forcefsck
    2. Run badblocks to check the disk: badblocks /dev/sda1
    3. Edit /etc/fstab and change the last "0" (the fs_passno column) accordingly, then reboot: the root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2.

    What I don't understand:

    - If this is a virtual drive, shouldn't it be less prone to errors?
    - Was the image created with one of the flags set? If not, what is triggering the check?
    - Why is fs_passno set to 0 on Amazon EC2 Ubuntu images? This is not the first one that is like this.
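
    For what it's worth, the two thresholds visible in the dumpe2fs output above (Maximum mount count and Check interval) are themselves adjustable with tune2fs; the sketch below disables both triggers, a judgment call that may be acceptable on a disposable cloud instance but is not a general recommendation:

        # Disable the max-mount-count trigger (-c -1) and the time-based interval (-i 0)
        sudo tune2fs -c -1 -i 0 /dev/xvda1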

    Read the article
