Search Results

Search found 4126 results on 166 pages for 'lost'.

Page 29/166 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up to this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. This is the message we got when trying to connect:

        ORA-01033: ORACLE initialization or shutdown in progress

    An attempt to alter the database open ended in:

        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    I tried to shut down / restart the database, but got this:

        Total System Global Area  566231040 bytes
        Fixed Size                  1220604 bytes
        Variable Size             117440516 bytes
        Database Buffers          444596224 bytes
        Redo Buffers                2973696 bytes
        Database mounted.
        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    When everything stayed the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using `CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF` and got "Database not open". As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I imagine a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there is a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
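    Since the original datafiles were already removed, one hedged way forward (a sketch only, and only sensible if the data in file 6 can be written off and no backup exists) is to take the lost file offline so the rest of the database can open:

        -- assumes SYSDBA access in SQL*Plus and that losing the contents of data file 6 is acceptable
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- the damaged tablespace can then be dropped and recreated
        DROP TABLESPACE data INCLUDING CONTENTS AND DATAFILES;

    If a backup of the datafile does exist, restoring and recovering it is the far less destructive route.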

    Read the article

  • Extract 2 numbers preceded by two different strings in a paragraph using TCL Regular Expression

    - by Madhu
    Hi, I need to extract two different numbers, each preceded by a different string: the Employee id from "Employee16" (I need 16) and the link count from "Employee links: 2" (I need 2). The source string looks like the following:

        Employee16, Employee name is QueenRose Working for 46w0d Billing is Distributed 65537 assigned tasks, 0 reordered, 0 unassigned 0 discarded, 0 lost received, 5/255 load received sequence unavailable, 0xC2E7 sent sequence Employee links: 2 active, 0 inactive (max not set, min not set) Dt3/5/10:0, since 46w0d, no tasks pending Dt3/5/10:10, since 21w0d, no tasks rcvd Employee is currently working in Hardware section.

        Employee19, Employee name is Edward11 Working for 48w4d Billing is Distributed 206801498 assigned tasks, 0 reordered, 0 unassigned 655372 discarded, 0 lost received, 9/255 load received sequence unavailable, 0x23CA sent sequence Employee links: 7 active, 0 inactive (max not set, min not set) Dt3/5/10:0, since 47w2d, tasks pending Dt3/5/10:10, since 28w6d, no tasks pending Dt3/5/10:11, since 18w4d, no tasks pending Dt3/5/10:12, since 18w4d, no tasks pending Dt3/5/10:13, since 18w4d, no tasks pending Dt3/5/10:14, since 18w4d, no tasks pending Dt3/5/10:15, since 7w2d, no tasks pending Employee is currently working in Hardware section.

        Employee6 (inactive) Employee links: 2 Dt3/5/10:0 (inactive) Dt3/5/10:10 (inactive)

        Employee7 (inactive) Employee links: 2 Dt3/5/10:0 (inactive) Dt3/5/10:10 (inactive)

    I tried the following:

        Multilink(\d+)[^\n\r]*[^M]*Member links:\s+(\d+)

    but it is not listing all the ids and links. Can anybody help me get this working? Thanks in advance, Madhu.
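    One rough approach in Tcl (a sketch only, assuming each EmployeeNN block contains exactly one "Employee links: N" line, so the two result lists can be paired by position):

        # $text holds the captured output shown above
        set ids   {}
        set links {}
        foreach {match id} [regexp -all -inline {Employee(\d+)} $text] {
            lappend ids $id
        }
        foreach {match n} [regexp -all -inline {Employee links:\s*(\d+)} $text] {
            lappend links $n
        }
        foreach id $ids n $links {
            puts "Employee $id -> $n links"
        }

    Because "Employee links" has no digit directly after "Employee", the first pattern only matches the employee ids, not the link lines.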

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input in PowerShell.exe:

        >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But calling the same API from C# receives all the data sent to the process (apart from some newline differences). Code:

        namespace ConOutTime {
            class Program {
                static void Main (string[] args) {
                    string s;
                    while ((s = Console.ReadLine ()) != null) {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33     Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33     Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So when calling the same API from PowerShell instead of C#, many parts of StdIn get 'eaten'. Is the PowerShell host reading StdIn itself, even though I didn't use 'PowerShell.exe -Command -'?
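    One thing worth trying (a hedged sketch, assuming the PowerShell host is itself draining StdIn for its pipeline): let the pipeline deliver the lines via $input instead of calling [Console]::ReadLine() directly:

        # rough sketch - $input enumerates whatever was piped into powershell.exe
        ping localhost | powershell -NonInteractive -NoProfile -Command "$input | ForEach-Object { '{0:HH:mm:ss} {1}' -f (Get-Date), $_ }"

    If the timestamps appear for every line this way, the loss above is most likely two readers competing for the same input stream.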

    Read the article

  • Garbled text when constructing emails with vmime

    - by Klaus Fiedler
    Hey, my Qt C++ program has a part where it needs to send the first 128 characters or so of the output of a bash command to an email address. The output from the tty is captured in a text box in my GUI called textEdit_displayOutput and put into the message I built using the message builder (the object m_vmMessage). Here is the relevant code snippet:

        m_vmMessage.getTextPart()->setCharset( vmime::charsets::US_ASCII );
        m_vmMessage.getTextPart()->setText( vmime::create < vmime::stringContentHandler > (
            ui->textEdit_displayOutput->toPlainText().toStdString() ) );
        vmime::ref < vmime::message > msg = m_vmMessage.construct();
        vmime::utility::outputStreamAdapter out( std::cout );
        msg->generate( out );

    Giving bash 'ls /' and a newline makes vmime give terminal output like this:

        ls /=0Abin etc=09 initrd.img.old mnt=09 sbin=09 tmp=09 vmlinuz.o= ld=0Aboot farts=09 lib=09=09 opt=09 selinux usr=0Acdrom home=09 = lost+found=09 proc srv=09 var=0Adev initrd.img media=09 root =

    Whereas it should look more like this:

        ls /
        bin    etc         initrd.img.old  mnt   sbin     tmp  vmlinuz.old
        boot   farts       lib             opt   selinux  usr
        cdrom  home        lost+found      proc  srv      var
        dev    initrd.img  media           root  sys      vmlinuz
        18:22>

    How do I encode the email properly? Does vmime just display it like that on purpose, and the actual content of the email is OK?

    Read the article

  • Zend_Auth and database SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and created a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to this one from parables-demo. Everything seems to work: session data are properly saved to the database and session objects are serialized, but authentication does not work when I enable this custom SaveHandler. I have debugged the authentication and everything works fine up until the next request, when the authentication data are lost. I suspected that it has something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?
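    A minimal sketch of the bootstrap ordering that usually matters here (the handler class name below is hypothetical, and this assumes the save handler has to be registered before anything - including Zend_Auth's default session storage - starts the session):

        // in Bootstrap.php
        protected function _initSession()
        {
            $handler = new My_Session_SaveHandler_Db();   // hypothetical DB save handler
            Zend_Session::setSaveHandler($handler);
            Zend_Session::start();                        // start only after the handler is registered
        }

    If Zend_Auth touches the session before the custom handler is in place, the identity can end up in a session the database handler never sees, which looks exactly like the identity being lost on the next request.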

    Read the article

  • Sorting and pagination do not work after I build a custom keyword search that is built using relat

    - by Roland
    I recently started to build a custom keyword search using Yii 1.1.x. The search works 100%. But as soon as I sort the columns or use pagination in the admin view, the search gets lost and all results are shown. In other words, it no longer filters so that only the search results show; somehow it resets. In my controller the code looks as follows:

        $builder=Messages::model()->getCommandBuilder();
        //Table1 Columns
        $columns1=array('0'=>'id','1'=>'to','2'=>'from','3'=>'message','4'=>'error_code','5'=>'date_send');
        //Table 2 Columns
        $columns2=array('0'=>'username');
        //building the Keywords
        $keywords = explode(' ',$_REQUEST['search']);
        $count=0;
        foreach($keywords as $key){
            $kw[$count]=$key;
            ++$count;
        }
        $keywords=$kw;
        $condition1=$builder->createSearchCondition(Messages::model()->tableName(),$columns1,$keywords,$prefix='t.');
        $condition2=$builder->createSearchCondition(Users::model()->tableName(),$columns2,$keywords);
        $condition = substr($condition1,0,-1) . " OR ".substr($condition2,1);
        $condition = str_replace('AND','OR',$condition);
        $dataProvider=new CActiveDataProvider('Messages', array(
            'pagination'=>array(
                'pageSize'=>self::PAGE_SIZE,
            ),
            'criteria'=>array(
                'with'=>'users',
                'together'=>true,
                'joinType'=>'LEFT JOIN',
                'condition'=>$condition,
            ),
            'sort'=>$sort,
        ));
        $this->render('admin',array(
            'dataProvider'=>$dataProvider,'keywords'=>implode(' ',$keywords),'sort'=>$sort
        ));

    and my view looks like this:

        $this->widget('zii.widgets.grid.CGridView', array(
            'dataProvider'=>$dataProvider,
            'columns'=>array(
                'id',
                array(
                    'name'=>'user_id',
                    'value'=>'CHtml::encode(Users::model()->getReseller($data->user_id))',
                    'visible'=>Yii::app()->user->checkAccess('poweradministrator')
                ),
                'to',
                'from',
                'message',
                /* 'date_send', */
                array(
                    'name'=>'error_code',
                    'value'=>'CHtml::encode($data->status($data->error_code))',
                ),
                array(
                    'class'=>'CButtonColumn',
                    'template'=>'{view} {delete}',
                ),
            ),
        ));

    I really do not know what to do anymore; I'm terribly lost, and any help will be highly appreciated.
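    A possible fix to try (a sketch only - it assumes the problem is simply that the sort and pager links drop the search parameter, and that the Yii 1.1.x version in use supports CSort::$params and CPagination::$params):

        // keep the keyword in every sort/pagination URL the grid generates
        $extraParams = array('search'=>$_REQUEST['search']);
        $dataProvider=new CActiveDataProvider('Messages', array(
            'pagination'=>array(
                'pageSize'=>self::PAGE_SIZE,
                'params'=>$extraParams + $_GET,
            ),
            'sort'=>array(
                'params'=>$extraParams + $_GET,
            ),
            'criteria'=>array(
                'with'=>'users',
                'together'=>true,
                'joinType'=>'LEFT JOIN',
                'condition'=>$condition,
            ),
        ));

    With the parameter carried along, $_REQUEST['search'] is still populated on the sorted or paged request, so the same $condition gets rebuilt instead of being silently dropped.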

    Read the article

  • Android Layout + Programming

    - by MD
    I'm trying to create a "scrollable" layout in Android. Even using developers.android.com, though, I feel a little bit lost at the moment. I'm somewhat new to Java, but not so much that I feel I should be having these issues--being new to Android is the bigger problem right now. The layout I'm trying to create should scroll in a sort of a "grid". I THINK what I'm looking for is the Gallery view, but I'm really lost as to how to implement it at the moment. I want it to "snap" to center the frame, like in the actual Gallery application. Essentially, if I had a photo gallery of 9 pictures, the idea is to scroll between them up/down AND side to side, in a 3x3 manner. Doesn't need to dynamically adjust, or anything like that, I just want a grid I can scroll through. I'm also not asking for anyone to give me explicit code for it--I'm trying to learn, more than anything. But pointing me in the right direction for helpful layout programming resources would be greatly appreciated, and confirming if it's a Gallery view I'm looking for would also be really helpful. Thanks in advance.

    Read the article

  • NetBeans Platform - how to refresh the property sheet view of a node?

    - by I82Much
    Hi all, I am using the PropertySheetView component to visualize and edit the properties of a node. This view should always reflect the most recent properties of the object; if the object is changed by another process, I want to refresh the view and see the updated properties. The best way I have found to do this is something like the following (making use of the EventBus library to publish and subscribe to changes in objects):

        public DomainObjectWrapperNode(DomainObject obj) {
            super (Children.LEAF, Lookups.singleton(obj));
            EventBus.subscribe(DomainObject.class, this);
        }

        public void onEvent(DomainObject event) {
            // Do a check to determine if the updated object is the one wrapped by this node;
            // if so fire a property sets change
            firePropertySetsChange(null, this.getPropertySets());
        }

    This works, but my place in the scrollpane is lost when the sheet refreshes; the view resets to the top of the list and I have to scroll back down to where I was before the refresh. So my question is: is there a better way to refresh the property sheet view of a node, specifically so that my place in the property list is not lost upon refresh?
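    One alternative worth experimenting with (a sketch only - the property name, getter and identity check below are hypothetical, and it assumes the sheet can update values in place when told which individual properties changed, rather than having the whole property set replaced):

        public void onEvent(DomainObject event) {
            if (!wrapsSameObject(event)) {   // hypothetical identity check against the wrapped object
                return;
            }
            // fire per-property changes instead of firePropertySetsChange(...)
            firePropertyChange("displayName", null, event.getDisplayName());
        }

    Replacing the whole property set forces the sheet to rebuild (and lose scroll position); per-property change events may let it refresh values without rebuilding.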

    Read the article

  • MySQL help, counting information on last records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (items) and the other holds status changes and other information for these items (details). The tables are set up as follows:

        Item
            itemID
            itemName
            itemDate

        details
            detID
            itemID
            modlvl
            status
            detDate

    All items have at least one record in the details table, but over time the status or the modification level has changed (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes. I want to display a table on my webpage using PHP that identifies the different mod levels of the items and shows a count of each of the current statuses of the items.

    EDIT

    Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. Status representations are:

        1 In Use
        2 In Store
        3 Being repaired
        4 In Transit
        5 For Disposal
        6 Disposed
        7 Lost

        Item
        itemID  OrigMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

        Details
        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

        MoLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1      3    3      1     0        0      1      1     9
        2      0    1      0     0        0      0      0     1
        3      0    1      0     0        0      0      0     1
        Total  3    5      1     0        0      1      1     11
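    A rough query sketch (MySQL) for the result table above - it assumes "current" means the details row with the highest detID per item, and it relies on MySQL summing boolean expressions as 0/1:

        SELECT d.modlvl           AS MoLvl,
               SUM(d.status = 1)  AS `Use`,
               SUM(d.status = 2)  AS Store,
               SUM(d.status = 3)  AS Repd,
               SUM(d.status = 4)  AS Transit,
               SUM(d.status = 5)  AS Displ,
               SUM(d.status = 6)  AS Dispd,
               SUM(d.status = 7)  AS Lost,
               COUNT(*)           AS Total
        FROM   details d
        JOIN  (SELECT itemID, MAX(detID) AS lastDet
               FROM   details
               GROUP  BY itemID) last ON last.lastDet = d.detID
        GROUP  BY d.modlvl
        WITH   ROLLUP;

    WITH ROLLUP adds the grand-total row. Against the sample data this returns 3/3/1/0/0/1/1 (total 9) for mod level 1, one item in store each for levels 2 and 3, and 11 overall, matching the desired table.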

    Read the article

  • How do I use cookies to store users' recent site history (PHP)?

    - by ggfan
    I decided to make a recent views box that allows users to see which links they clicked on before. Whenever they click on a posting, the posting's id gets stored in a cookie and displayed in the recent views box. In my ad.php I have a definerecentview function that stores the posting's id in a cookie (so I can use it later when fetching the posting's information, such as title and price, from the database). How do I create a cookie array for this?

        **EXAMPLE:** user clicks on ad.php?posting_id='200'

        //this is in the ad.php
        function definerecentview()
        {
            $posting_id=$_GET['posting_id'];
            //this adds 30 days to the current time
            $Month = 2592000 + time();
            $i=1;
            if (isset($posting_id)){
                //lost here
                for($i=1,$i< ???,$i++){
                    setcookie("recentviewitem[$i]", $posting_id, $Month);
                }
            }
        }

        function displayrecentviews()
        {
            echo "<div class='recentviews'>";
            echo "Recent Views";
            if (isset($_COOKIE['recentviewitem']))
            {
                foreach ($_COOKIE['recentviewitem'] as $name => $value)
                {
                    echo "$name : $value <br />\n"; //right now just shows the posting_id
                }
            }
            echo "</div>";
        }

    How do I use a for loop or foreach loop so that whenever a user clicks on an ad, it adds an entry to the cookie array? It would work like this:

        1. clicks on ad.php?posting_id=200 --- setcookie("recentviewitem[1]",200,$month);
        2. clicks on ad.php?posting_id=201 --- setcookie("recentviewitem[2]",201,$month);
        3. clicks on ad.php?posting_id=202 --- setcookie("recentviewitem[3]",202,$month);

    Then in the displayrecentviews function, I just echo however many cookies were set? I'm just totally lost in creating a for loop that sets the cookies. Any help would be appreciated.
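    A rough sketch of one way to do it (PHP) - it assumes a single cookie named "recentviews" holding a JSON-encoded array of the last few posting ids, rather than one cookie per slot:

        function definerecentview($posting_id)
        {
            $month  = time() + 2592000;                       // 30 days from now
            $recent = isset($_COOKIE['recentviews'])
                ? json_decode($_COOKIE['recentviews'], true)
                : array();
            if (!is_array($recent)) {
                $recent = array();
            }
            // drop the id if it is already there, then push it to the front
            $recent = array_values(array_diff($recent, array($posting_id)));
            array_unshift($recent, $posting_id);
            $recent = array_slice($recent, 0, 5);             // keep the 5 most recent
            setcookie('recentviews', json_encode($recent), $month, '/');
        }

        function displayrecentviews()
        {
            echo "<div class='recentviews'>Recent Views";
            $recent = isset($_COOKIE['recentviews'])
                ? json_decode($_COOKIE['recentviews'], true)
                : array();
            foreach ((array)$recent as $posting_id) {
                echo htmlspecialchars($posting_id) . "<br />\n";  // look up title/price by id here
            }
            echo "</div>";
        }

    json_encode/json_decode need PHP 5.2+; serialize()/unserialize() or an imploded string would work the same way on older versions.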

    Read the article

  • HTML5 on iPhone Safari - data stored by localStorage does not always persist. Why?

    - by Aerodyne
    Hi, I am writing a simple iPhone web app using HTML5's localStorage. Tests on a 2G device show that data stored using localStorage does not persist after the Safari process is killed, although the opened Safari windows are remembered. The data is also lost in the case where I am on a different site in a different Safari window and then switch to the window where the web app in question is shown: when Safari loads the page it automatically refreshes it, and the data is lost. This is a simple test case:

        <html>
        <head>
        <meta name="viewport" content="height=device-height, width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />
        </head>
        <body>
        <script>
        alert("1:" + localStorage.getItem("test"));
        localStorage.setItem("test", "123");
        alert("2:" + localStorage.getItem("test"));
        </script>
        </body>

    As far as I understand, the data should persist! Can anyone shed some light on this behavior? What should I do to get the persistence to work? Thanks! Tom.

    Read the article

  • How to "End Task" not "Kill" or "Terminate"?

    - by Luiscencio
    Hi community. I have a 3G card to provide internet to a remote computer. I have to run a program (provided with the card) to establish the connection. Since the connection is suddenly lost from time to time, I wrote a script that kills the program and reopens it so that the connection is re-established. However, certain versions of this program don't drop the connection when they are killed/terminated, only when they are closed properly. So I am looking for a script or program that "properly closes" a window, so I can close the program and reopen it when the connection is lost. This is the code that kills the program:

        Option Explicit
        Dim objWMIService, objProcess, colProcess
        Dim strComputer, strProcessKill
        strComputer = "."
        strProcessKill = "'Telcel3G.exe'"

        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")

        Set colProcess = objWMIService.ExecQuery _
            ("Select * from Win32_Process Where Name = " & strProcessKill )
        For Each objProcess in colProcess
            objProcess.Terminate()
        Next
        WSCript.Echo "Just killed process " & strProcessKill _
            & " on " & strComputer
        WScript.Quit
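    A rough sketch of a gentler close (it assumes the dialer has a normal window that responds to a close request; the install path below is a placeholder):

        ' taskkill WITHOUT /F asks the program to close (roughly the "End Task" behaviour)
        ' instead of terminating it outright
        Dim objShell
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run "taskkill /IM Telcel3G.exe", 0, True
        WScript.Sleep 5000                                    ' give it time to shut down cleanly
        objShell.Run """C:\Program Files\Telcel3G\Telcel3G.exe""", 1, False   ' hypothetical path

    If the dialer ignores the close request, sending it an ALT+F4 via AppActivate/SendKeys is another (less reliable) option.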

    Read the article

  • Is OpenID too complicated?

    - by John Leidegren
    I'm beginning to seriously doubt the OpenID community, despite the fact that it works. I'm currently evaluating OpenID as an authentication service for 'this' site, and while the promises are great, I just can't get it to work, and I'm really lost. I ask the SO community to help me out here: give me answers and show me examples so I can leverage this the way it was meant to be. My scenario is very typical. I want to authenticate users through a specific Google Apps domain. If you have access to this Google Apps domain, then you have access to my web application. Where I get lost is in all the prerequisites and dependencies involved. What is XRD? What is Yadis? Why do I need XRD and Yadis? What do I need to do to deploy OpenID authentication on my website? Also, this is really important to me: when I log in to SO, I use my Google account. When I click the login button I'm presented with a confirmation page where I grant SO the right to use my Google account credentials. Somehow, Google knows that it's "Stackoverflow.com" that's asking me if it's okay to log in, and I wish to know what manner of control I have over this little text. I intend to deploy OpenID on several different domains, but I would prefer that they all work without having to be individually configured with special parameters, such as secret API keys and whatnot. However, I don't know for sure whether this is a prerequisite of OpenID, or of the Federated Login API that Google provides.

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high performance and high scalable system not only on top of the cloud platform but the on-premise environment as well. On March 2011 the Windows Azure AppFabric Caching had been production launched. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announce a grand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). And the original Windows Azure AppFabric Caching was renamed to Windows Azure Shared Caching.   What’s Caching (Preview) If you had been using the Shared Caching you should know that it is constructed by a bunch of cache servers. And when you want to use you should firstly create a cache account from the developer portal and specify the size you want to use, which means how much memory you can use to store your data that wanted to be cached. Then you can add, get and remove them through your code through the cache URL. The Shared Caching is a multi-tenancy system which host all cached items across all users. So you don’t know which server your data was located. This caching mode works well and can take most of the cases. But it has some problems. The first one is the performance. Since the Shared Caching is a multi-tenancy system, which means all cache operations should go through the Shared Caching gateway and then routed to the server which have the data your are looking for. Even though there are some caches in the Shared Caching system it also takes time from your cloud services to the cache service. Secondary, the Shared Caching service works as a block box to the developer. The only thing we know is my cache endpoint, and that’s all. Someone may satisfied since they don’t want to care about anything underlying. But if you need to know more and want more control that’s impossible in the Shared Caching. The last problem would be the price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching we have to pay for the cache service while waste of some of our memory and CPU locally. Since the issues above Microsoft offered a new caching mode over to us, which is the Caching (Preview). Instead of having a separated cache service, the Caching (Preview) leverage the memory and CPU in our cloud services (web role and worker role) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which hosted or near our cloud applications. Without any gateway and routing, since it located in the same data center and same racks, it provides really high performance than the Shared Caching. The Caching (Preview) works side-by-side to our application, initialized and worked as a Windows Service running in the virtual machines invoked by the startup tasks from our roles, we could get more information and control to them. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, so it’s free. What we need to pay is the original computing price. And the resource on each machines could be used more efficiently.   Enable Caching (Preview) It’s very simple to enable the Caching (Preview) in a cloud service. Let’s create a new windows azure cloud project from Visual Studio and added an ASP.NET Web Role. Then open the role setting and select the Caching page. 
This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview) just open the “Enable Caching (Preview Release)” check box. And then we need to specify which mode of the caching clusters we want to use. There are two kinds of caching mode, co-located and dedicate. The co-located mode means we use the memory in the instances we run our cloud services (web role or worker role). By using this mode we must specify how many percentage of the memory will be used as the cache. The default value is 30%. So make sure it will not affect the role business execution. The dedicate mode will use all memory in the virtual machine as the cache. In fact it will reserve some for operation system, azure hosting etc.. But it will try to use as much as the available memory to be the cache. As you can see, the Caching (Preview) was defined based on roles, which means all instances of this role will apply the same setting and play as a whole cache pool, and you can consume it by specifying the name of the role, which I will demonstrate later. And in a windows azure project we can have more than one role have the Caching (Preview) enabled. Then we will have more caches. For example, let’s say I have a web role and worker role. The web role I specified 30% co-located caching and the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As the figure above, cache 1 was contributed by three web role instances while cache 2 was contributed by 2 worker role instances. Then we can add items into cache 1 and retrieve it from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2 since they are isolated. Back to our Visual Studio we specify 30% of co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches. Now we just use the default one. Now we had enabled the Caching (Preview) in our web role settings. Next, let’s have a look on how to consume our cache.   Consume Caching (Preview) The Caching (Preview) can only be consumed by the roles in the same cloud services. As I mentioned earlier, a cache contributed by web role can be connected from a worker role if they are in the same cloud service. But you cannot consume a Caching (Preview) from other cloud services. This is different from the Shared Caching. The Shared Caching is opened to all services if it has the connection URL and authentication token. To consume the Caching (Preview) we need to add some references into our project as well as some configuration in the Web.config. NuGet makes our life easy. Right click on our web role project and select “Manage NuGet packages”, and then search the package named “WindowsAzure.Caching”. In the package list install the “Windows Azure Caching Preview”. It will download all necessary references from the NuGet repository and update our Web.config as well. Open the Web.config of our web role and find the “dataCacheClients” node. Under this node we can specify the cache clients we are going to use. For each cache client it will use the role name to identity and find the cache. Since we only have this web role with the Caching (Preview) enabled so I pasted the current role name in the configuration. Then, in the default page I will add some code to show how to use the cache. 
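    Before moving on to the page code, here is a minimal sketch of what that dataCacheClients section can end up looking like (the role name "WebRole1" is an assumption - it has to match whichever role has Caching (Preview) enabled):

        <dataCacheClients>
          <dataCacheClient name="default">
            <autoDiscover isEnabled="true" identifier="WebRole1" />
          </dataCacheClient>
        </dataCacheClients>

    The identifier is how the cache client locates the in-role cache cluster, which is why the role name gets pasted into the configuration.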
    I will have a textbox on the page where the user can input his or her name, then press a button to generate an email address for him/her. In the backend code I check whether this name has already been added to the cache. If yes, I return the email immediately. Otherwise, I sleep the thread for 2 seconds to simulate the latency, then add it to the cache and return it to the page.

        protected void btnGenerate_Click(object sender, EventArgs e)
        {
            // check if name is specified
            var name = txtName.Text;
            if (string.IsNullOrWhiteSpace(name))
            {
                lblResult.Text = "Error. Please specify name.";
                return;
            }

            bool cached;
            var sw = new Stopwatch();
            sw.Start();

            // create the cache factory and cache
            var factory = new DataCacheFactory();
            var cache = factory.GetDefaultCache();

            // check if the name specified is in cache
            var email = cache.Get(name) as string;
            if (email != null)
            {
                cached = true;
                sw.Stop();
            }
            else
            {
                cached = false;
                // simulate the latency
                Thread.Sleep(2000);
                email = string.Format("{0}@igt.com", name);
                // add to cache
                cache.Add(name, email);
            }

            sw.Stop();
            lblResult.Text = string.Format(
                "Cached = {0}. Duration: {1}s. {2} => {3}",
                cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
        }

    The Caching (Preview) can be used on the local emulator, so we can just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it was not in the cache. But if we re-enter the name it comes back at once from the cache. Since the Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Just use 2 instances, tweak some code to show the current instance ID on the page, and have another try. Then we can see that a cached item can be retrieved even though it was added by another instance.

    Consume Caching (Preview) Across Roles

    As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to our cloud solution and add the same code to its default page. In its Web.config we add the cache client pointing at the role that has caching enabled, by specifying that role's name. Then we start the solution locally and go to web role 1, specify a name and let it generate the email for us. Since there is no cache entry for this name it takes about 2 seconds, but the email is saved into the cache. Then we go to web role 2 and specify the same name, and we can see it retrieve the email saved by web role 1 and return it very quickly. Finally we can upload our application to Windows Azure and test again. Make sure you have changed the cache cluster status storage account to a real Azure account.

    More Awesome Features

    As an in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is high availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed cache world, if a cache cluster fails, the data it stored is lost. This behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache will be backed up automatically. If so, the data belonging to this named cache will be replicated on another instance of the role.
    Then if one of the instances fails, the data can be retrieved from its backup instance. To enable the backup, just open the Caching page in Visual Studio. On the named cache where you want to enable backup, change the Backup Copies value from 0 to 1. Backup Copies only accepts 0 or 1: "0" means no backup and no high availability, while "1" means high availability is enabled, with the data backed up onto another instance. But when using the high availability feature there are a few things we need to keep in mind. Firstly, high availability does NOT mean the data in the cache can never be lost under any kind of failure. For example, if we have a cache-enabled role with 10 instances and 9 of them fail, then most of the cached data will be lost, since the primary and backup instances may fail together. Normally this will not happen, since MS guarantees that it will use an instance in a different fault domain for the backup cache. The other point is that enabling backup means you store two copies of your data: if you think 100MB of memory is enough for the cache, you need at least 200MB once backup is enabled. Besides high availability, the Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching does. It supports local cache with notification, and it supports absolute and sliding-window expiration types as well. The Caching (Preview) also supports the Memcached protocol, which means that if you have an application based on Memcached, you can use Caching (Preview) without any code changes; all you need to do is change the configuration of how you connect to the cache. Similar to the Windows Azure Shared Caching, MS also offers an out-of-box ASP.NET session provider and output cache provider on top of the Caching (Preview).

    Summary

    Caching is a very important component when we are building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effectiveness. So now we have two caching solutions in Windows Azure: the Shared Caching and the Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near distributed cache, then you'd better use Caching (Preview).

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like the Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use entity framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..."   Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant).   Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.   Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next.   Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish.   And there's your model. If you want, you can make additional changes here before going on to generate the code.   Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization. 
Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code groupt in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.   As soon as you do this, you'll two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!).   Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed.   The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template build a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet for each entity. Think of DbSet as a simple List, but with Entity Framework features turned on.   //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Data.Entity; public partial class Entities : DbContext { public Entities() : base("name=Entities") { } public DbSet<Album> Albums { get; set; } public DbSet<Artist> Artists { get; set; } public DbSet<Cart> Carts { get; set; } public DbSet<Genre> Genres { get; set; } public DbSet<OrderDetail> OrderDetails { get; set; } public DbSet<Order> Orders { get; set; } } } It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc. using System.Data.Entity; namespace MvcMusicStore.Models { public class MusicStoreEntities : DbContext { public DbSet Albums { get; set; } public DbSet Genres { get; set; } public DbSet Artists { get; set; } public DbSet Carts { get; set; } public DbSet Orders { get; set; } public DbSet OrderDetails { get; set; } } } Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just need to remove the header and clean up the namespace. //------------------------------------------------------------------------------ // // This code was generated from a template. 
// // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Cart { // Primitive properties public int RecordId { get; set; } public string CartId { get; set; } public int AlbumId { get; set; } public int Count { get; set; } public System.DateTime DateCreated { get; set; } // Navigation properties public virtual Album Album { get; set; } } } I did a bit more customization on the Album class. Here's what was generated: //------------------------------------------------------------------------------ // // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // //------------------------------------------------------------------------------ namespace EF_CodeFirst_From_Existing_Database.Models { using System; using System.Collections.Generic; public partial class Album { public Album() { this.Carts = new HashSet(); this.OrderDetails = new HashSet(); } // Primitive properties public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } // Navigation properties public virtual Artist Artist { get; set; } public virtual Genre Genre { get; set; } public virtual ICollection Carts { get; set; } public virtual ICollection OrderDetails { get; set; } } } I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columsn we've told it about and doesn't complain about the others. Here's the basic class: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { public class Album { public int AlbumId { get; set; } public int GenreId { get; set; } public int ArtistId { get; set; } public string Title { get; set; } public decimal Price { get; set; } public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List OrderDetails { get; set; } } } It's my class, not Entity Framework's, so I'm free to do what I want with it. 
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below: using System.ComponentModel; using System.ComponentModel.DataAnnotations; using System.Web.Mvc; using System.Collections.Generic; namespace MvcMusicStore.Models { [Bind(Exclude = "AlbumId")] public class Album { [ScaffoldColumn(false)] public int AlbumId { get; set; } [DisplayName("Genre")] public int GenreId { get; set; } [DisplayName("Artist")] public int ArtistId { get; set; } [Required(ErrorMessage = "An Album Title is required")] [StringLength(160)] public string Title { get; set; } [Required(ErrorMessage = "Price is required")] [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")] public decimal Price { get; set; } [DisplayName("Album Art URL")] [StringLength(1024)] public string AlbumArtUrl { get; set; } public virtual Genre Genre { get; set; } public virtual Artist Artist { get; set; } public virtual List<OrderDetail> OrderDetails { get; set; } } } The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
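    As a quick sanity check that the cleaned-up classes work end to end, here is a short usage sketch (an illustration only - it assumes the MusicStoreEntities context and Album class shown above, plus a connection string named "MusicStoreEntities" pointing at the existing database):

        // requires using System.Linq; and the MvcMusicStore.Models namespace
        using (var db = new MusicStoreEntities())
        {
            // query the pre-existing catalog through the generated model
            var cheapAlbums = db.Albums
                                .Where(a => a.Price < 10.00m)
                                .OrderBy(a => a.Title)
                                .ToList();

            // updates flow back through the same context
            var first = cheapAlbums.FirstOrDefault();
            if (first != null)
            {
                first.Price += 1.00m;
                db.SaveChanges();
            }
        }

    By default EF Code First looks for a connection string whose name matches the context class, which is why no extra wiring is needed here.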

    Read the article

  • 'The RPC server is unavailable' when converting a physical ISA/Forefront TMG machine to virtual (P2V) in SCVMM

    - by Goran B.
    When I try to convert a physical ISA/TMG machine to virtual using SCVMM, I keep getting an error in the Collect machine configuration step (the 'Scan Now' button):

        VMM is unable to complete the request. The connection to the agent MACHINE_NAME was lost.
        Ensure that the computer MACHINE_NAME exists on the network, WMI service and the agent are
        installed and running and that a firewall is not blocking HTTP and WMI traffic.
        ID: 3157
        Details: The RPC server is unavailable (0x800706BA)

    Firewall rules allow RPC traffic from the SCVMM machine to the ISA/TMG machine.

    Read the article

  • ubuntu 12.04 is unresponsive

    - by Andreas Schiemann
    I've been using Ubuntu 12.04 for 3 weeks. Today it became unresponsive and I lost the work I had open: the hand cursor was displayed, as if I had selected something, but in fact I never had - I was just reading an article online. I finally powered down the machine and then turned it back on. In Windows there is the Task Manager, where one can see what's taking up all your CPU or RAM; is there such a program in Ubuntu 12.04, and if not, can I install an app that serves the same purpose?
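    A small pointer, in case it helps (assuming a stock Ubuntu 12.04 desktop): the GUI equivalent of Task Manager is System Monitor, and top does the same job in a terminal:

        gnome-system-monitor &     # GUI view of per-process CPU and RAM
        top                        # terminal equivalent (htop is a nicer, installable alternative)

    Either one shows per-process CPU and memory use, so the next freeze can be traced to a culprit.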

    Read the article

  • Convert info pages to man pages

    - by mbac32768
    I was invited to re-post this question with less opinion, so if it seems familiar, that's why. How can I convert info pages into man pages? I used to have a shell one-liner that flattened an entire info document into a single flat page, suitable for navigating with less, but I seem to have lost it. Thanks!
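    One candidate for the lost one-liner (a sketch; it assumes GNU texinfo's standalone info reader with its --subnodes and -o options, and coreutils is just an example topic):

        info --subnodes -o - coreutils | less

    This recursively dumps the whole info document, subnodes included, to stdout as one flat page for paging with less. It does not produce a real man page, but it restores the "navigate with less" workflow described above.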

    Read the article

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (Server edition) 9.10: one for the public internet, the other for a private LAN. During the install I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these settings for the last few days, and everything's been fine. Today I decided it was time to set up the local adapter. I edited /etc/network/interfaces to add the details for eth1 (see below) and restarted networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using its external IP address fails, but I can ping its local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via its external IP address again (but obviously not its internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working. Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked):

        rob@rhea:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary (public) network interface
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask xxx.xxx.xxx.xxx
            network xxx.xxx.xxx.xxx
            broadcast xxx.xxx.xxx.xxx
            gateway xxx.xxx.xxx.xxx

        # The secondary (private) network interface
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255
            gateway 192.168.99.254

    I then do this:

        rob@rhea:~$ sudo /etc/init.d/networking restart
        * Reconfiguring network interfaces...     [ OK ]
        rob@rhea:~$ sudo ifup eth0
        ifup: interface eth0 already configured
        rob@rhea:~$ sudo ifup eth1
        ifup: interface eth1 already configured

    Then, from another machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    Back on the Ubuntu server in question:

        rob@rhea:~$ sudo ifdown eth1

    ... and again on the other machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So... what am I doing wrong?
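    One hedged thing to check (a sketch of the eth1 stanza only - it assumes the external loss comes from having two default gateways, so the default route sometimes points out the LAN interface): leave the gateway line off the private adapter and keep a single default route on eth0:

        # The secondary (private) network interface - no "gateway" line here,
        # so eth0 keeps the only default route
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255

    If traffic from the LAN side still needs to reach other private subnets, an explicit route for just those networks (for example via a post-up ip route command) is safer than a second default gateway.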

    Read the article

  • What features does enabling ACPI 2.0 support in BIOS enable?

    - by glenneroo
    So I was booting up my box and for some odd reason my BIOS had lost its settings (again), resetting everything to defaults. I was digging around making sure things were configured to my liking and noticed the option to enable ACPI 2.0 support. It was disabled by default, but I was wondering: do I need ACPI 2.0 support? The motherboard is an ASUS M3A79-T Deluxe. I should also note that I primarily use Windows 7.

    Read the article

  • HTTPS Stunnel and Haproxy

    - by panalbish
    I am trying to use stunnel in front of HAProxy for SSL support. The SSL certificates are located according to the stunnel configuration. I am able to get an https connection, but every time I use https the session gets lost. I am not using the Tomcat 8443 port to get the secure content. Is it possible to get the https connection using only stunnel and HAProxy? My requirement is to have an https connection once the user has logged in.

    Read the article

  • OneNote 2007 - recommendation(s) of a good place/tutorials to learn features

    - by studiohack23
    I'm pretty familiar with Office 2007; however, I have just recently acquired OneNote 2007. It is such a big and powerful tool that I'm pretty much lost on its features and how to use it. I don't really know where to start. I'm looking for recommendations on a good place to learn more about OneNote, its features, and what it does. For what it's worth, I'm a student, so student perspectives on how to use and learn OneNote would be awesome! Thanks!

    Read the article

  • Wireless Router will not allow connection

    - by Chris
    I am working with a RangeMax(TM) Wireless Router WPN824 v2, and it will not allow a wireless connection nor a wired one. I am at an actual loss; with no way into the router I cannot test its settings. I have unplugged it, left it off for 32 hours, and reset it, all with no luck. Any ideas? Could the router just be done-zo?

    Read the article
