Search Results

Search found 6686 results on 268 pages for 'catch all'.


  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need some computation. Each record will have some combination of these functions run against it: Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID

    Question: Assuming that ID is an int of some kind, and that the int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the right set of delegates fire based on the source data (say, print the evens and sum the odds in this sample).

        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");

                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);

                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
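
    A minimal sketch of the launch-map idea, assuming each record ID maps to the set of operations that should run for it (the IDs, lambdas, and the ProcessRecord helper below are illustrative, not part of the original sample):

        using System;
        using System.Collections.Generic;

        static class LaunchMap
        {
            // Hypothetical map: record ID -> list of operations to fire for that ID.
            static readonly Dictionary<int, List<Func<int, int>>> Map =
                new Dictionary<int, List<Func<int, int>>>
                {
                    { 1, new List<Func<int, int>> { x => x * x } },  // e.g. "print the evens"
                    { 2, new List<Func<int, int>> { x => x + x } }   // e.g. "sum the odds"
                };

            static void ProcessRecord(int id, int data)
            {
                List<Func<int, int>> ops;
                if (!Map.TryGetValue(id, out ops))
                    return;                          // unknown ID -> ignore the record

                foreach (var op in ops)
                {
                    // op could equally be invoked asynchronously via BeginInvoke, as in the sample
                    Console.WriteLine("ID {0}: {1}", id, op(data));
                }
            }

            static void Main()
            {
                ProcessRecord(1, 4);
                ProcessRecord(2, 5);
            }
        }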

    Read the article

  • iOS 5 New Features vs Android

    - by kerry
    Browsing through the iOS 5 features list, I can't help but notice a lot of it is catch-up. Having owned both an iPhone and an Android for a considerable amount of time, I figured I would jot down my opinions.

    Notification Center - Completely ripped off from Android, but looks good and is a much needed addition.

    iMessage - This is very interesting, as most people who would think it's cool probably wouldn't really understand the significance. Basically, Apple is adding an IM application to iOS. Now iPhone / iPad users can sit around messaging each other about how cool it is, like Crackberry users circa 2003. I guess the only real improvement over MMS is that you can easily set up groups, see when each other are typing, and don't incur text messaging charges; at the expense of leaving your non-iOS buddies out (who wants to talk to those losers anyways?).

    Newsstand - An app update and not an OS one (Apple typically doesn't make distinctions). It all seems like stuff my current Nook will do. Note: I did look to compare prices, but it seems that information is not available without downloading iTunes. Lame.

    Reminders - TODO lists are ho hum, but the ability to have reminders fire when you arrive at or leave a location is pretty cool.

    Twitter integration - The fact that the best Apple can come up with is 'one at a timing' online service integration is laughable at best.

    Camera - Can control it from the lock screen. Now you'll have tons of pocket-lint photos in your iCloud to go along with the wicked shot of that cheetah that just unexpectedly ran by your apartment.

    Photos - Speaking of iCloud, all of your device's photos will be synced through it. That's cool I guess; not sure if Android will do the same.

    Safari - What? You haven't been reading RSS feeds on your device this whole time? Something tells me you aren't about to start.

    PC Free - Finally Apple untethers the iPhone. What took them so long?

    Game Center - This should be an interesting service. Attention Apple fanboys: immediately forget how they are blatantly copying Microsoft's achievements (at least rename them).

    Wifi Sync - Just couldn't cut the cord completely, could they? For what it's worth, the Zune has been doing this for 5 years now.

    All in all, a pretty big update. Mostly iCloud. Mostly keeping up the mobile status quo. As an Android user, I can't say there is anything I am envious of.

    Read the article

  • Generic Adjacency List Graph implementation

    - by DmainEvent
    I am trying to come up with a decent adjacency list graph implementation so I can start tooling around with all kinds of graph problems and algorithms, like traveling salesman and others... but I can't seem to come up with a decent implementation. This is probably because I am trying to dust the cobwebs off my data structures class. What I have so far - implemented in Java - is basically an edgeNode class that has a generic type and a weight, in the event the graph is indeed weighted.

        public class edgeNode<E> {
            private E y;
            private int weight;
            //... getters and setters as well as constructors...
        }

    I have a graph class that has a list of edges, a value for the number of vertices, an int value for edges, as well as a boolean value for whether or not it is directed. That brings up my first question: if the graph is indeed directed, shouldn't I have a value in my edgeNode class? Or would I just need to add another vertex to my LinkedList? That would imply that a directed graph is 2X as big as an undirected graph, wouldn't it?

        public class graph {
            private List<edgeNode<?>> edges;
            private int nVertices;
            private int nEdges;
            private boolean directed;
            //... getters and setters as well as constructors...
        }

    Finally, does anybody have a standard way of initializing their graph? I was thinking of reading in a pipe-delimited file, but that is so 1997.

        public graph GenereateGraph(boolean directed, String file){
            List<edgeNode<?>> edges;
            graph g;
            try{
                int count = 0;
                String line;
                FileReader input = new FileReader("C:\\Users\\derekww\\Documents\\JavaEE Projects\\graphFile");
                BufferedReader bufRead = new BufferedReader(input);
                line = bufRead.readLine();
                count++;
                edges = new ArrayList<edgeNode<?>>();
                while(line != null){
                    line = bufRead.readLine();
                    Object edgeInfo = line.split("|")[0];
                    int weight = Integer.parseInt(line.split("|")[1]);
                    edgeNode<String> e = new edgeNode<String>((String) edgeInfo, weight);
                    edges.add(e);
                }
                return g;
            }
            catch(Exception e){
                return null;
            }
        }

    I guess when I am adding edges, if the directed boolean is true I would be adding a second edge. So far, this all depends on the file I write. So if I wrote a file with the following vertices and weights...

        Buffalo | 18
        Pittsburgh | 20
        New York | 15
        D.C | 45

    I would obviously load them into my list of edges, but how can I represent one vertex connected to the other, and so on? I would need the opposite vertex? Say I was representing highways connecting each city, weighted and un-directed (each edge is bi-directional, with weights in some fictional distance unit)... Would my implementation be the best way to do that? I found this tutorial online (Graph Tutorial) that has a connector object. That appears to me to be a collection of vertices pointing to each other. So you would have A and B, each with their weights and so on, and you would add these to a list, and that list of connectors to your graph... That strikes me as somewhat cumbersome and a little dismissive of the adjacency list concept. Am I wrong, or is that a novel solution? This is all inspired by Steve Skiena's Algorithm Design Manual, which I have to say is pretty good so far. Thanks for any help you can provide.
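
    For comparison, below is a minimal adjacency-list sketch along the lines Skiena describes: a map from each vertex to its list of outgoing edges, with an undirected edge simply stored once per endpoint (all class and method names here are illustrative):

        import java.util.*;

        class Graph<V> {
            static class Edge<V> {
                final V to;
                final int weight;
                Edge(V to, int weight) { this.to = to; this.weight = weight; }
            }

            private final Map<V, List<Edge<V>>> adj = new HashMap<>();
            private final boolean directed;

            Graph(boolean directed) { this.directed = directed; }

            void addEdge(V from, V to, int weight) {
                adj.computeIfAbsent(from, k -> new ArrayList<>()).add(new Edge<>(to, weight));
                if (!directed) {
                    // an undirected edge is just two directed entries; no flag is needed
                    // on the edge itself, and the graph only grows by one list entry
                    adj.computeIfAbsent(to, k -> new ArrayList<>()).add(new Edge<>(from, weight));
                }
            }

            List<Edge<V>> neighbors(V v) {
                return adj.getOrDefault(v, Collections.<Edge<V>>emptyList());
            }
        }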

    Read the article

  • Using data input from pop-up page to current with partial refresh

    - by dpDesignz
    I'm building a product editor webpage using Visual C#. I've got an image uploader popping up using fancybox, and once it is submitted I need the info from my fancybox to go back to the first page without clearing any of its info. I know I need to use AJAX, but how would I do it?

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="uploader.aspx.cs" Inherits="uploader" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head id="Head1" runat="server">
            <title></title>
        </head>
        <body style="width:350px; height:70px;">
            <form id="form1" runat="server">
            <asp:ScriptManager ID="ScriptManager1" runat="server">
            </asp:ScriptManager>
            <div>
                <div style="width:312px; height:20px; background-color:Gray; color:White; padding-left:8px; margin-bottom:4px; text-transform:uppercase; font-weight:bold;">Uploader</div>
                <asp:FileUpload id="fileUp" runat="server" />
                <asp:Button runat="server" id="UploadButton" text="Upload" onclick="UploadButton_Click" />
                <br /><asp:Label ID="txtFile" runat="server"></asp:Label>
                <div style="width:312px; height:15px; background-color:#CCCCCC; color:#4d4d4d; padding-right:8px; margin-top:4px; text-align:right; font-size:x-small;">Click upload to insert your image into your product</div>
            </div>
            </form>
        </body>
        </html>

    The code-behind so far:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Configuration; // Add to page
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Data;              // Add to the page
        using System.Data.SqlClient;    // Add to the page
        using System.Text;              // Add to Page

        public partial class uploader : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            protected void UploadButton_Click(object sender, EventArgs e)
            {
                if (fileUp.HasFile)
                    try
                    {
                        fileUp.SaveAs("\\\\london\\users\\DP006\\Websites\\images\\" + fileUp.FileName);
                        string imagePath = fileUp.PostedFile.FileName;
                    }
                    catch (Exception ex)
                    {
                        txtFile.Text = "ERROR: " + ex.Message.ToString();
                    }
                    finally
                    {
                    }
                else
                {
                    txtFile.Text = "You have not specified a file.";
                }
            }
        }
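
    One common pattern - sketched here on the assumption that the uploader runs in a fancybox iframe and that the parent page has a hidden field (the imgPath id below is hypothetical) to receive the result - is to register a startup script after a successful upload that writes the path back to the opener and closes the box, so the parent page keeps all of its state:

        // hypothetical sketch, added at the end of the successful-upload branch
        string script =
            "window.parent.document.getElementById('imgPath').value = '" + imagePath + "';" +
            "window.parent.jQuery.fancybox.close();";
        ClientScript.RegisterStartupScript(GetType(), "uploadDone", script, true);

    The parent page never posts back, so nothing on it is cleared; it just reads the hidden field (or updates a preview image) once the box closes.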

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET Providers. But when it comes to executing stored procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers.

    Creating the RIA Service

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add the reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage{
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: The created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure.

        [Insert]
        public void SendMessage(NewMessage newMessage)
        {
            try
            {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so for the purpose of our example it does not matter what it returns, but you will need to create a method for the query event as well.

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages()
        {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: On the Data Sources, add a new data source. Choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (Project Name).Web.dll generated from the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 13: On the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 14: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project: To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here.

    Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below.

    The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {

            public static void main(String[] args) {
                try {
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code after drag/drop into a Java source file.

    I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets.

    The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • why my code still cannot connect with database? [closed]

    - by Wen Teng
    package com.mems.travis;

        import java.util.ArrayList;
        import java.util.List;

        import org.apache.http.NameValuePair;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONObject;

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.DialogInterface;
        import android.content.Intent;
        import android.os.AsyncTask;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.widget.Button;
        import android.widget.EditText;
        import android.widget.RadioButton;

        public class UserRegister extends Activity {

            JSONParser jsonParser = new JSONParser();

            EditText inputName;
            EditText inputUsername;
            EditText inputEmail;
            EditText inputPassword;
            RadioButton button1;
            RadioButton button2;
            Button button3;

            int success = 0;

            // url to create new product
            private static String url_register_user = "http://192.168.1.100/MEMS/add_user.php";

            // JSON Node names
            private static final String TAG_SUCCESS = "success";

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_user_register);

                // Edit Text
                inputName = (EditText) findViewById(R.id.nameTextBox);
                inputUsername = (EditText) findViewById(R.id.usernameTextBox);
                inputEmail = (EditText) findViewById(R.id.emailTextBox);
                inputPassword = (EditText) findViewById(R.id.pwTextBox);

                // Create button
                //RadioButton button1 = (RadioButton) findViewById(R.id.studButton);
                //RadioButton button2 = (RadioButton) findViewById(R.id.shopownerButton);
                Button button3 = (Button) findViewById(R.id.regSubmitButton);

                // button click event
                button3.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View view) {
                        String name = inputName.getText().toString();
                        String username = inputUsername.getText().toString();
                        String email = inputEmail.getText().toString();
                        String password = inputPassword.getText().toString();

                        if (name.contentEquals("") || username.contentEquals("") || email.contentEquals("") || password.contentEquals("")) {
                            AlertDialog.Builder builder = new AlertDialog.Builder(UserRegister.this);

                            // 2. Chain together various setter methods to set the dialog characteristics
                            builder.setMessage(R.string.nullAlert)
                                   .setTitle(R.string.alertTitle);
                            builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener() {
                                public void onClick(DialogInterface dialog, int id) {
                                    // User clicked OK button
                                }
                            });

                            // 3. Get the AlertDialog from create()
                            AlertDialog dialog = builder.show();
                        }
                        else {
                            new RegisterNewUser().execute();
                        }
                    }
                });
            }

            class RegisterNewUser extends AsyncTask<String, String, String> {

                protected String doInBackground(String... args) {
                    String name = inputName.getText().toString();
                    String username = inputUsername.getText().toString();
                    String email = inputEmail.getText().toString();
                    String password = inputPassword.getText().toString();

                    // Building Parameters
                    List<NameValuePair> params = new ArrayList<NameValuePair>();
                    params.add(new BasicNameValuePair("name", name));
                    params.add(new BasicNameValuePair("username", username));
                    params.add(new BasicNameValuePair("email", email));
                    params.add(new BasicNameValuePair("password", password));

                    // getting JSON Object
                    // Note that create product url accepts POST method
                    JSONObject json = jsonParser.makeHttpRequest(url_register_user, "GET", params);

                    // check log cat for response
                    Log.d("Send Notification", json.toString());

                    try {
                        int success = json.getInt(TAG_SUCCESS);
                        if (success == 1) {
                            // successfully created product
                            Intent i = new Intent(getApplicationContext(), StudentLogin.class);
                            startActivity(i);
                            finish();
                        } else {
                            // failed to register
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    return null;
                }
            }
        }
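
    Two things stand out as worth checking, sketched below; both lines are hypotheses to test, not confirmed fixes. The comment says the create-product URL accepts POST, yet the request is issued with "GET"; and http://192.168.1.100/... is only reachable if that really is the PHP server's LAN address - from the stock Android emulator, the host machine is reached at 10.0.2.2 instead.

        // hypothesis 1: the add_user.php script may expect POST, as the comment says
        JSONObject json = jsonParser.makeHttpRequest(url_register_user, "POST", params);

        // hypothesis 2: when testing on the emulator, point at the host via 10.0.2.2
        private static String url_register_user = "http://10.0.2.2/MEMS/add_user.php";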

    Read the article

  • Z axis trouble with glTranslatef(...) - LWJGL

    - by Zarkopafilis
    Here is the code:

        private static boolean up = true, down = false, left = false, right = false,
                               reset = false, in = false, out = false;

        public void start() {
            try {
                Display.setDisplayMode(new DisplayMode(800, 600));
                Display.create();
            } catch (LWJGLException e) {
                e.printStackTrace();
                System.exit(0);
            }

            GL11.glMatrixMode(GL11.GL_PROJECTION);
            GL11.glLoadIdentity();
            GL11.glOrtho(0, 800, 0, 600, 0.00001f, 1000);
            GL11.glMatrixMode(GL11.GL_MODELVIEW);

            Keyboard.enableRepeatEvents(true);

            while (!Display.isCloseRequested()) {
                GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
                input();
                if (up) {
                    GL11.glTranslatef(0, 0.1f, 0);
                }
                if (down) {
                    GL11.glTranslatef(0, -0.1f, 0);
                }
                if (left) {
                    GL11.glTranslatef(-0.1f, 0, 0);
                }
                if (right) {
                    GL11.glTranslatef(0.1f, 0, 0);
                }
                if (in) {
                    GL11.glTranslatef(0, 0, 1f);
                }
                if (out) {
                    GL11.glTranslatef(0, 0, -1f);
                }
                if (reset) {
                    GL11.glLoadIdentity();
                }

                GL11.glBegin(GL11.GL_QUADS);
                GL11.glColor3f(255, 255, 255);
                GL11.glVertex3f(800 / 2, 600 / 2, 0);
                GL11.glVertex3f(800 / 2 + 200, 600 / 2, 0);
                GL11.glVertex3f(800 / 2 + 200, 600 / 2 + 200, 0);
                GL11.glVertex3f(800 / 2, 600 / 2 + 200, 0);
                GL11.glColor3f(0, 255, 0);
                GL11.glVertex3f(800 / 2, 600 / 2, 1);
                GL11.glVertex3f(800 / 2 + 200, 600 / 2, 1);
                GL11.glVertex3f(800 / 2 + 200, 600 / 2 + 200, 1);
                GL11.glVertex3f(800 / 2, 600 / 2 + 200, 1);
                GL11.glEnd();

                Display.update();
            }
            Display.destroy();
        }

        public static void main(String[] argv) {
            new main().start();
        }

        public void input() {
            up = false;
            down = false;
            left = false;
            right = false;
            reset = false;
            in = false;
            out = false;

            if (Keyboard.isKeyDown(Keyboard.KEY_SPACE)) {
                reset = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_W)) {
                up = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_S)) {
                down = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_A)) {
                left = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_D)) {
                right = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_Q)) {
                in = true;
            }
            if (Keyboard.isKeyDown(Keyboard.KEY_E)) {
                out = true;
            }
        }

    As you can see, I am creating 2 quads: a white one at z = 0 and a green one at z = 1. The WASD keys function correctly. Also, when I hit SPACE the white quad is shown. If I then press E, I can see the green quad. But if I press Q afterwards, I don't see the white one again (SPACE works every time). Also, if I render the green one at z = -1, everything works perfectly, BUT you may need up to 3 key presses of Q/E to see the other quad. Why is that happening?
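
    One thing the loop never does is depth testing: only GL_COLOR_BUFFER_BIT is cleared and GL_DEPTH_TEST is never enabled, so which quad "wins" depends on draw order and clipping rather than on Z. A sketch of the usual setup (assuming the default framebuffer was created with a depth buffer on this system):

        // once, after Display.create()
        GL11.glEnable(GL11.GL_DEPTH_TEST);

        // every frame: clear depth along with color, or last frame's depth
        // values keep masking the new geometry
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    It is also worth noting that glOrtho(0, 800, 0, 600, 0.00001f, 1000) makes the visible Z range run from -0.00001 to -1000 in eye space, so quads placed at z = 0 and z = 1 sit outside the view volume until the Q/E translations happen to push them into it - consistent with the odd number of key presses needed.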

    Read the article

  • Understanding LAG in GoldenGate

    - by Liu Maclean(???)
    The GGSCI LAG command reports how far behind a process is running: for an Extract, the gap between the time a change was written to the Oracle online redo logfile and the time Extract processed it; for a Replicat, the gap between a record's source timestamp and the time Replicat applied it. Lag can be introduced at each stage of the pipeline:

    - Extract reads the redo log and writes the data to its TRAIL / sends it to the REMOTE HOST
    - A datapump Extract reads the extract trail and sends the data on to the REMOTE HOST
    - The collector on the target receives the data and writes it to the LOCAL TRAIL
    - Replicat reads the LOCAL TRAIL and applies the changes

    OGG can use the RANGE option to split apply work across several Replicats, and the Replicat parameter MAXTRANSOPS also influences how transactions are applied.

    Lag can be inspected in GGSCI with the INFO and STATUS commands as well as with SEND and LAG, and the figures they show are obtained differently: the lag shown by INFO is based on the checkpoint information the process last reported to the MANAGER, while SEND <OBJECT>, lag (or the LAG <OBJECT> command) asks the process to evaluate its current lag directly. Lag is reported as a time difference, not in kilobytes.

    The lag value is essentially the difference between the timestamp of the record currently being processed and the system clock of the EXTRACT/PUMP/REPLICAT process. When a process has no backlog it runs in REAL TIME and the lag stays minimal; if redo keeps being generated while a process is stopped, the lag grows, and after a restart you can watch the process catch up. For example, STOP an EXTRACT, let the database generate some redo, then START it again and watch the lag drain away:

        GGSCI (XIANGBLI-CN) 27> stop load2
        Sending STOP request to EXTRACT LOAD2 ...
        Request processed.

        GGSCI (XIANGBLI-CN) 28> start load2
        Sending START request to MANAGER ...
        EXTRACT LOAD2 starting

        GGSCI (XIANGBLI-CN) 31> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:04:34 (updated 00:00:08 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:21:32  Seqno 44, RBA 13750272
                             SCN 0.1845479 (1845479)

        GGSCI (XIANGBLI-CN) 35> lag load2
        Sending GETLAG request to EXTRACT LOAD2 ...
        Last record lag: 130 seconds.
        At EOF, no more records to process.

        GGSCI (XIANGBLI-CN) 36> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:27:33  Seqno 44, RBA 13817856
                             SCN 0.1845671 (1845671)

    Note how Last record lag and Checkpoint Lag differ while the EXTRACT/PUMP/REPLICAT is catching up. If the process has many GB of redo to work through, it can take a while before the reported lag drops back to zero.

    Also, because the lag shown by INFO and LAG is checkpoint-based, Long Running Transactions (LRTs) that have not yet issued a COMMIT can inflate it: checkpoints only move forward on COMMIT, so an open transaction holds the checkpoint back even when the process is keeping up. On the Replicat side, the MAXTRANSOPS parameter, which splits very large transactions into smaller ones, therefore also affects the reported LAG.

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010...

    I have been browsing my brains out to find a simple backup strategy for our single Windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don't need anything exotic. I only need a "daily" backup (losing a day's worth of data is not catastrophic).

    We have created and saved an EBS-backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10-15 minutes of creating the image - not ideal.

    The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it's pretty simple, and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can't simply launch a new instance directly from a snapshot.

    I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted, and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach; this is an 010 level course.)

    From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console):

    1. For your backups, simply snapshot the system volume (/dev/sda1) as needed.
    2. If you lose your running instance, do the following:
       a. Create a new volume from your last snapshot backup
       b. Launch another instance of your starting AMI (must be EBS-backed)
       c. Stop this instance.
       d. Detach the existing system volume from the new stopped instance and discard.
       e. Attach the newly created volume as system volume (/dev/sda1) to the stopped instance.
       f. Re-start the new instance.

    I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?

    Read the article

  • The Cindy Shearin Group: New Scam Targets Renters in the Area

    - by user226089
    MONROE - Craigslist is a popular site when trying to find that perfect deal on a rental home or apartment. Experts warn some of these rental ads aren't what they seem. We decided to take a look.

    On our Craigslist search we found this house for rent. The problem is, this home's not for rent - it's for sale. "I think it's a huge deal," said Shane Wooten, the realtor for this home in Monroe. His properties have become the target of a common scam, aimed at taking your money. "It looks like they're trying to scam them out of their deposit and first month's rent," adds Wooten.

    He says scammers copy and paste the sale ads from legitimate realtor sites to Craigslist as rental ads. "I can usually tell when one hits Craigslist because I'll usually get 20 to 30 phone calls that day." They then pretend to be out of town on business or personal matters, and give only an email address as a point of contact. Usually they'll ask for money up front on a deal too good to miss. "You'll have a house that's supposed to rent for $950-1000 a month, and they'll have it renting for $600 a month," says Wooten.

    During our conversation, he shows us text messages from one scammer who says he'll mail the keys to this house if Wooten wires money for a deposit and first month's rent.

    Jo Ann Deal of the Better Business Bureau says scammers are getting better at making themselves out to be realtors. "We're really concerned for our real estate agents with this scam," says Deal. She says that realtors have to stay more on top of their vacant homes in order to protect their businesses.

    So how can you tell if the house you want is really for rent? She says if the home owner lives out of the country, can't meet face to face, or asks for a payment through a money wire, it's probably a scam. "There are some catch-lines you watch for," says Deal. "If the marketing is really good but there's no phone number, no physical address, and they will communicate with you only by email and you can do it today, then it's probably a scam."

    You should always report fishy ads to Craigslist or the BBB, and never send money through a wire transfer.

    Read the article

  • JavaMail application won't send email to external SMTP server

    - by Luiz Cruz
    This is actually a question from an exam, but I believe it could help others troubleshooting a similar situation.

    In a system, an e-mail needs to be sent to a certain mailbox. The following Java code, which is part of a larger system, was developed for that. Assume that "example.com" corresponds to a valid registered internet domain.

        public void sendEmail(){
            String s1 = "Warning";
            String b1 = "Contact IT support.";
            String r1 = "[email protected]";
            String d1 = "[email protected]";
            String h1 = "mx.intranet";
            Properties p1 = new Properties();
            p1.put("mail.host", h1);
            Session session = Session.getDefaultInstance(p1, null);
            MimeMessage message = new MimeMessage(session);
            try {
                message.setFrom(new InternetAddress(r1));
                message.addRecipient(Message.RecipientType.TO, new InternetAddress(d1));
                message.setSubject(s1);
                message.setText(b1);
                Transport.send(message);
            } catch (MessagingException e){
                System.err.println(e);
            }
        }

    The execution of this code, within the testing environment of an application server, does NOT work as expected. The mailbox of the "example.com" server never receives the email, even though all string values in the code are correctly attributed. The output of the command "netstat -np TCP" on the application server during execution is shown below:

        Src Add      Src Port   Dest Add         Dest Port   State
        192.168.5.5  54395      192.168.7.1      25          SYN_SENT
        192.168.5.5  54390      192.168.7.1      110         TIME_WAIT
        192.168.5.5  52001      200.218.208.118  80          CLOSE_WAIT
        192.168.5.5  52050      200.218.208.118  80          ESTABLISHED
        192.168.5.5  50001      200.255.94.202   25          TIME_WAIT
        192.168.5.5  50000      200.255.94.202   25          ESTABLISHED

    With the exception of the lines that were NAT'd, all others are associated with the Java application server, which created them after the execution of the code above. The e-mail server used in this environment is the production server, which is online and does not require any authentication for internal connections. Based on this situation, point out three possible causes for the problem.
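
    When reasoning about code like this, it helps to make the transport configuration explicit and to turn on JavaMail's session debugging, which prints the SMTP dialogue (or the connection failure - note the SYN_SENT to port 25 above, which suggests the configured host never answered). A sketch using standard JavaMail properties:

        Properties p1 = new Properties();
        p1.put("mail.smtp.host", "mx.intranet");   // explicit SMTP host for the smtp transport
        p1.put("mail.smtp.port", "25");
        // getInstance avoids reusing the JVM-wide session cached by getDefaultInstance
        Session session = Session.getInstance(p1, null);
        session.setDebug(true);                    // logs the SMTP exchange to stderr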

    Read the article

  • OSX: Howto start VirtualBox VM on startup?

    - by snies
    The Question

    How do I start this wiki VM at startup of the OSX Server? I am running OSX Server 10.6.8 and VirtualBox 4.1.8 r75467 and a Debian Linux VM (called "wiki").

    What I tried so far

    Following this article: http://mikkel.hoegh.org/blog/2010/12/23/run-virtualbox-boot-mac-os-x/, I have written this plist and placed it in /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist:

        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>bar.foo.WikiVirtualBox</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/VBoxHeadless</string>
                <string>-s</string>
                <string>wiki</string>
            </array>
            <key>RunAtLoad</key>
            <true></true>
            <key>UserName</key>
            <string>root</string>
            <key>WorkingDirectory</key>
            <string>/var/root</string>
            <key>StandardErrorPath</key>
            <string>/var/log/bar.foo.WikiVirtualBox.stderr.log</string>
            <key>StandardOutPath</key>
            <string>/var/log/bar.foo.WikiVirtualBox.stdout.log</string>
        </dict>
        </plist>

    and told launchd to start it:

        sudo launchctl load -w /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist

    The Logfile

    But the VM doesn't start. A look at tail -f /var/log/system.log shows:

        sudo[1909]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/bin/launchctl load -w /Library/LaunchDaemons/bar.foo.WikiVirtualBox.plist
        VBoxSVC[1914]: 3891612: (connectAndCheck) Untrusted apps are not allowed to connect to or launch Window Server before login.
        VBoxSVC[1914]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        com.apple.launchd[1] (bar.foo.WikiVirtualBox[1910]): Exited with exit code: 1

    When I log into the server via ssh (so no login window opened) I can run /usr/bin/VBoxHeadless -s wiki and it works. So I don't understand the error above.
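
    One reading of that log line (a hypothesis, not a verified fix): LaunchDaemons run as root outside any GUI session, and VBoxSVC is refusing the pre-login Window Server connection, while the ssh test ran as a normal user whose VM registry actually contains "wiki". Two things worth trying are setting UserName to the account under which the VM was registered, and/or loading the job as a LaunchAgent bound to the GUI session instead, e.g. in /Library/LaunchAgents with:

        <key>LimitLoadToSessionType</key>
        <string>Aqua</string>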

    Read the article

  • Does a router have a receiving range?

    - by Aadit M Shah
    So my dad bought a TP-Link router (Model No. TL-WA7510N) which apparently has a transmitting range of 1km, and he believes that it also has a receiving range of 1km. So he's arguing with me that the router (which is a transceiver) can communicate with any device in the range of 1km, whether or not that device has a transmitting range of 1km. To put it graphically:

        +----+                      1km                      +----+
        |    |---------------------------------------------->|    |
        | TR |                                               | TR |
        |    |<----                                          |    |
        +----+ 100m                                          +----+

    So here's the problem: The two devices are 1km apart. The first device has a transmitting range of 1km. The second device only has a transmitting range of 100m. According to my dad, the two devices can talk to each other. He says that the first device has a transmitting and a receiving range of 1km, which means that it can both send data to devices 1km away and receive data from devices 1km away. To me this makes no sense. If the second device can only send data to devices 100m away, then how can the first device catch the transmission?

    He further argues that for bidirectional communication both the sender and the receiver should have overlapping areas of transmission. According to him, if two devices have an overlapping area of transmission then they can communicate, even when neither device has enough transmission power to reach the other, because they have enough receiving power to capture the transmission. Obviously this makes absolutely no sense to me. How can a device sense a transmission which hasn't even reached it yet and go out, capture it, and bring it back? To me a transceiver only has transmission power; it has zero "receiving power". Hence, from my point of view, both devices should have a transmission range far enough to reach the other for bidirectional communication to be possible; but no matter how much I try to explain to my dad, he adamantly disagrees.

    So, to put an end to this debate once and for all, who is correct? Is there even such a thing as a receiving range? Can a device fetch a transmission that would otherwise never reach it? I would like a canonical answer on this.
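
    In radio terms this is a link-budget question, and each direction of the link is evaluated separately. A one-line sketch in the standard dB form:

        P_rx = P_tx + G_tx + G_rx - L_path

    A given direction works only if P_rx exceeds that receiver's sensitivity. So a sensitive receiver (and a high-gain antenna) really can hear a weak, distant transmitter - "receiving range" is a meaningful notion - but it cannot amplify a signal that never arrives above the noise floor, and the weaker of the two one-way links is what limits two-way communication.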

    Read the article

  • Is there a way to permanently arrange 2 displays under XP?

    - by rumtscho
    When I am home, on a business trip, or in a meeting, I use my laptop in the usual way. When I get to work, I put it on the docking station and boot it with the lid closed. The image appears on the two displays connected to the docking station: on the left, an old monitor connected over VGA; on the right, a big widescreen connected over DVI.

    Obviously, the video card seems to think that the DVI is the primary output and the VGA the secondary one, so Windows always displays the widescreen on the left and the old FSC monitor on the right. So when I want to move the mouse pointer from the (physically) left display to the (physically) right display, I have to move it from right to left, which is a usability nightmare.

    Of course, I can just drag one display over the other one in the display properties, and then everything is as it should be. The catch: Windows remembers this only as long as it has the two displays. Every time it runs on the laptop display, it forgets the setting.

    Physically switching the monitors isn't an option, for ergonomic reasons. I prefer to run the more important applications on the bigger screen with the better colourspace, and the shape of my desk forces me to sit off-center, so the more important applications should be shown on the right display. Just switching the video ports doesn't help either: when I connect the big monitor over VGA, image quality deteriorates visibly.

    So what I do now is: every time I bring the laptop to my desk, I boot it. I wait the whole 7 minutes of XP booting, syncing network drives, etc. Then I fire up the display properties, switch to the last tab, drag the widescreen display to the right, and close. Only then can I start working. Does someone have a better idea?

    The laptop is a Dell Latitude 630 with Windows XP SP3. It has an nVidia graphics card (not an onboard chip).

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our Goals

    - Super easy deploys (we want to push a button to update production)
    - Automated deploys to test environments (using Jenkins)
    - Ease of maintenance (we have an app to write, and don't want to spend our time fiddling with production issues)
    - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future.

    So, along these lines, I've been listening to podcasts, watching Ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a jar and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt that was error-prone given our limited timeline and lack of understanding around a Chef server workflow).

    However, a coworker did a little looking around on his own and felt like Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple.

    What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around.

    - Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike?
    - If you don't use Elastic Beanstalk but considered it, what do you use, and why didn't you use Elastic Beanstalk?
    - What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist.

    The problem appears to be conflated between my desire to handle two error conditions:

    1. vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate "site not found" error page.
    2. The 404 page-not-found condition of the vhost.

    Additionally, I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none seem to exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have this here as my catch-all site. Next I have my API vhost, and then finally my mass vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost. However, with this setup, all domains resolve to my default site-not-found. Why is this?

    By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
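
    One way to separate the two conditions - a sketch, assuming mod_rewrite is available and the same /var/www/vhosts/<host>/current layout - is to keep the mass-vhost block first so name matching works, and redirect to the site-not-found vhost only when the per-host directory is missing:

        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            RewriteEngine On
            # no directory for this Host header -> send to the "site not found" vhost
            RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
            RewriteRule ^ http://sitenotfound.mydomain.org/ [R=302,L]
            ErrorDocument 404 /404.html   # real per-site 404s still use this
        </VirtualHost>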

    Read the article

  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. so instead of mysite.com/myaccount, they could have myaccount.com). They just change the A record of their domain, and we then use a wildcard vhost on Apache to catch the requests from the custom domains. The setup is basically as seen below. The first vhost catches the mysite.com/myaccount requests and the second would be used for myaccount.com. As you can see, they have the exact same path and PHP cookie_domain.

    I've noticed some weird behavior surrounding the line below "# The line below me". When active, the custom domains get a new session_id every page load (one that isn't the same as the non-custom-domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is not the same as the one they'd see on a non-custom-domain site either, despite being completely on the same server.

    There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?).

    For what it's worth, I'm using a database to manage the sessions (i.e. so there are no issues with being on different servers, etc., but that's irrelevant since we only use one server to handle all requests currently anyways). I'm fairly certain it is related to some sort of CSRF browser protection thing, but shouldn't it be smart enough to know it's on the same server?

    Note: these are not subdomains; they're separate domains entirely (but on the same server).

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName mysite.local
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>

        #Wildcard (custom domain) vhost
        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/mysite.com"
            ServerName default
            ServerAlias *
            ErrorLog "/opt/local/apache2/logs/mysite.com-error.log"
            CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common
            <Directory "/opt/local/www/mysite.com">
                AllowOverride All
                #php_value session.save_path "/opt/local/www/mysite.com/sessions"
                # The line below me
                php_value session.cookie_domain "mysite.local"
                php_value auto_prepend_file "/opt/local/www/mysite.com/core.php"
            </Directory>
        </VirtualHost>
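
    A hypothesis consistent with both symptoms: in the wildcard vhost, the session cookie is scoped to mysite.local, so browsers visiting myaccount.com refuse to store or send it and PHP mints a fresh session_id on every request; with the line commented out, the cookie defaults to the request's own host, so the session is stable but - being a different cookie domain - it can never match the mysite.local session. A sketch of the change to test in the wildcard vhost:

        # let PHP scope the session cookie to whatever host was actually requested
        php_value session.cookie_domain ""

    Sharing one login across entirely different domains would still need an explicit handoff (like the redirect "hack" described), since browsers will not send one domain's cookie to another.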

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR

    If Intel RST has marked a drive other than my RAID set as the system drive, how can I make the RAID set be seen as the system drive again, and catch it up to the current drive?

    What Happened

    NOTE: Some perhaps unwise decisions are ahead. This is as best as I can recall the order of things.

    - I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time.
    - I replaced the 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement.
    - I got a second manufacturer's drive replacement and used it as a spare, so I now had a 1TB/2TB drive in a RAID1 and another 1TB as a spare.
    - The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive and upped the config to a 2x2TB RAID1, with the other, functioning manufacturer's drive as a spare.
    - I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive.

    So, I have a 2x2TB RAID1 that is apparently offline, and 1 spare 1 TB refurbished drive that everything is being run from. Not a great idea.

    Options I'm considering

    1. Bring the 2x2TB set back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!). Anything that isn't is likely trivial, enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no?
    2. Try to somehow get the RAID1 updated with the data from the current system drive.
    3. Option 3?

    Read the article

  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place to ask it.

    I have 3 VMs, each running the TFS Build Host Service: one has 1 controller and 1 agent, and the other two have 2 build agents each.

    Most of the time (7 out of 10 builds) it comes back with the following error message:

        TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION:
        There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14
        that could accept the message. This is often caused by an incorrect address or SOAP action.
        See InnerException, if present, for more details.

    and there are no errors logged when I do get this message. The following is the config file that I have created:

        <configuration>
            <appSettings>
                <add key="traceWriter" value="true"/>
            </appSettings>
            <system.diagnostics>
                <switches>
                    <add name="BuildServiceTraceLevel" value="4"/>
                    <add name="API" value="4"/>
                    <add name="Authentication" value="4"/>
                    <add name="Authorization" value="4"/>
                    <add name="Database" value="4"/>
                    <add name="General" value="4"/>
                    <add name="traceLevel" value="4"/>
                </switches>
                <trace autoflush="true" indentsize="4">
                    <listeners>
                        <add name="myListener"
                             type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                             initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                        <remove name="Default" />
                    </listeners>
                </trace>
            </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but this does not seem to be the problem, as sometimes the build actually does go. I have tried refreshing the template as some sites suggest. Has anyone come across a solution for this problem? Or can anyone tell me how to catch these errors when they happen?

    Read the article

  • Postfix to deliver mail to a virtual address mailbox

    - by Chloe
    Postfix version 2.6.6, Dovecot version 2.0.9.

    I want to set up Postfix + Dovecot. Dovecot seems to be working: I can authenticate. However, the mailbox is empty! Nothing gets delivered! I followed many tutorials on Postfix + Dovecot, but they seem to want to complicate things by using the Dovecot LDA or MySQL. I just want it to be very simple, and having Postfix deliver to the virtual mailboxes is fine. I don't need MySQL either. I already set up a custom password file that Dovecot uses for authentication, and I can log in to POP3 with SSL.

    I can see from the logs that Postfix is delivering to the system user accounts (the catch-all), instead of the virtual users that I set up in Dovecot. The SMTP + SSL authentication seems to work also. I can also see from the logs that Dovecot is checking the correct virtual mail folder. I just need to figure out how to get Postfix to deliver to the virtual mailboxes.

    I have the following settings, which I believe are relevant. Let me know what other settings you need to see:

        alias_maps = hash:/etc/aliases
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = xxx.com
        myhostname = mail.xxx.com
        mynetworks = 99.99.99.99, 99.99.99.99
        myorigin = $mydomain
        relay_domains = $mydestination, xxx.com, domain2.net, domain3.com
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = reject_non_fqdn_sender reject_non_fqdn_recipient reject_unknown_recipient_domain permit_sasl_authenticated check_relay_domains
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_path = private/auth
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = check_sender_mx_access cidr:/etc/postfix/bogus_mx reject_invalid_hostname reject_unknown_sender_domain reject_non_fqdn_sender
        virtual_mailbox_base = /var/spool/vmail
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_minimum_uid = 444

    Postfix master.cf:

        submission inet n - - - - smtpd
            -o smtpd_tls_security_level=encrypt
            -o smtpd_sasl_auth_enable=yes
            -o smtpd_sasl_type=dovecot
            -o smtpd_sasl_path=private/auth
            -o smtpd_sasl_security_options=noanonymous
            -o smtpd_sasl_local_domain=$myhostname
            -o smtpd_client_restrictions=permit_sasl_authenticated,reject
            -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
            -o smtpd_sender_restrictions=reject_sender_login_mismatch
            -o smtpd_recipient_restrictions=reject_non_fqdn_recipient,reject_unknown_recipient_domain,permit_sasl_authenticated,reject

    Dovecot related:

        mail_location = maildir:~/Maildir
        passdb {
            args = /etc/dovecot/users.conf
            driver = passwd-file
        }
        service auth {
            unix_listener /var/spool/postfix/private/auth {
                mode = 0660
                user = postfix
            }
        }

    The virtual mail user:

        vmail:x:444:99:virtual mail users:/var/spool/vmail:/sbin/nologin

    Here is the /var/log/maillog when I try to send something to myself:

        Oct 25 22:10:05 308321 postfix/smtpd[2200]: connect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:05 308321 postfix/smtpd[2200]: D224BD4753: client=user-999.cable.mindspring.com[99.99.99.99], sasl_method=LOGIN, [email protected]
        Oct 25 22:10:06 308321 postfix/cleanup[2207]: D224BD4753: message-id=<7DC3C163CFFC483AB6226F8D3D9969D2@dumbopc>
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: from=<[email protected]>, size=1385, nrcpt=1 (queue active)
        Oct 25 22:10:06 308321 postfix/smtpd[2200]: disconnect from user-999.cable.mindspring.com[99.99.99.99]
        Oct 25 22:10:06 308321 postfix/local[2208]: D224BD4753: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=1.1, delays=0.53/0.02/0/0.51, dsn=2.0.0, status=sent (delivered to mailbox)
        Oct 25 22:10:06 308321 postfix/qmgr[2168]: D224BD4753: removed
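
    The log line relay=local, status=sent (delivered to mailbox) confirms the message went through Postfix's local(8) delivery agent. That matches xxx.com being listed (via $mydomain) in mydestination as well as in virtual_mailbox_domains; a domain should belong to exactly one address class, and no virtual_mailbox_maps is defined at all. A sketch of the usual shape of the fix (the vmailbox map file name is illustrative):

        # main.cf: keep xxx.com out of the local class, declare the virtual mappings
        mydestination = localhost
        virtual_mailbox_domains = xxx.com, domain2.net, domain3.com
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox   # e.g. "[email protected]  xxx.com/user/"
        virtual_uid_maps = static:444                       # the vmail user
        virtual_gid_maps = static:99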

    Read the article

  • What's the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site.

    Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor; however, their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site, but this is:

    - A bad solution
    - Getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:

    - RPO: Zero loss - This is transactional data with financial impact, so we can't lose anything.
    - RTO: Tending to zero - The business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers, but none meet our case exactly:

    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring, but I am concerned that for RPO: 0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option.

    Our current DR process is to:

    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to, as quickly as possible, catch up with the primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back.

    Is there a better option than mirroring (should we be doing log-shipping too), and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • nginx rewrite for /blah/(.*) /$1

    - by skrewler
    I'm migrating from mod_php to nginx. I got everything working except for this rewrite; I'm just not familiar enough with nginx configuration to know the correct way to do it. I came up with the following by looking at a sample on the nginx site:

        server {
            server_name test01.www.myhost.com;
            root /home/vhosts/my_home/blah;
            access_log /var/log/nginx/blah.access.log;
            error_log /var/log/nginx/blah.error.log;
            index index.php;

            location / {
                try_files $uri $uri/ @rewrites;
            }

            location @rewrites {
                rewrite ^ /index.php last;
                rewrite ^/ht/userGreeting.php /js/iFrame/index.php last;
                rewrite ^/ht/(.*)$ /$1 last;
                rewrite ^/userGreeting.php$ /js/iFrame/index.php last;
                rewrite ^/a$ /adminLogin.php last;
                rewrite ^/boom\/(.*)$ /boom/index.php?q=$1 last;
                rewrite ^favicon.ico$ favico_ry.ico last;
            }

            # This block will catch static file requests, such as images, css, js.
            # The ?: prefix is a 'non-capturing' mark, meaning we do not require
            # the pattern to be captured into $1, which should help improve performance.
            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                # Some basic cache-control for static files to be sent to the browser
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            include php.conf;
        }

    The issue I'm having is with this rewrite:

        rewrite ^/ht/(.*)$ /$1 last;

    99% of the requests that hit this rewrite are static files, so I think maybe they're being sent to the static-file block and that's where things get messed up. I tried adding the following, but it didn't work:

        location ~* ^ht\/.*\.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    Any help would be appreciated. I know the best thing would be to change the references from /ht/whatever.jpg to /whatever.jpg in the code, but that's not an option for now.
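
    One likely culprit is nginx's location selection order: a matching regex location wins over a plain prefix location, so a request for /ht/foo.png lands in the static-file block (where the file doesn't exist under that path, since on disk it lives at /foo.png) and never reaches @rewrites. A ^~ prefix location suppresses the regex search, so the /ht/ rewrites run before the static-file block can intercept. A sketch, untested:

        # ^~ beats the regex locations, so /ht/... requests are rewritten first
        location ^~ /ht/ {
            rewrite ^/ht/userGreeting.php$ /js/iFrame/index.php last;
            rewrite ^/ht/(.*)$ /$1 last;
        }

    Because rewrite ... last restarts location matching with the rewritten URI, /ht/foo.png becomes /foo.png and is then served by the static-file location with the caching headers intact.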

    Read the article

  • Single domain name potentially resolving to multiple servers

    - by Jace
    First time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated; I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it, or I need to be told that it just isn't possible.
    - I've got an old site on Server A (some kind of Linux distribution), with the domain SomeDomain.com.
    - There is a new site sitting on Server B (Ubuntu), with the intention of having SomeDomain.com serve it in the future (it is replacing the old site).
    - Server A also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/).
    The goal: to have SomeDomain.com and all extensions of this domain name (sub-domains, URLs, etc.) serve the new site on Server B, BUT the URL SomeDomain.com/web-app/ must serve the web app on Server A.
    The catch: Server A is a shared server with a hosting company with VERY limiting restrictions in place. I cannot adjust its DNS settings (apart from the name servers; I cannot set A records or anything), but I have full access to Server B to do as I wish. The web app must therefore be served from SomeDomain.com/web-app/ and not from a sub-domain or anything else. These limitations make migrating the web app from Server A to Server B rather undesirable, AND the web app will be replaced in the near future, so it isn't worth the effort right now. Ultimately I will want one domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to Server A's IP. Note: the domain names don't, technically, have to resolve to one IP or another, but ultimately the URLs must stay consistent.
    Some things I have tried:
    - I've looked into mod_rewrite and .htaccess to try to achieve this effect, but it doesn't look like it's going to work for me, though I may have done it wrong (on Server B, I just checked whether the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A).
    - I do have the ability to modify the name servers on both servers.
    - I am not able to make a sub-domain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine which site to serve). I figured this could be useful, as I could then set an A record on Server B pointing to the web app on Server A; but alas, Server A requires SomeDomain.com.
    If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas, or a solution.
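
    DNS can't solve this on its own: a hostname resolves to an IP before the browser ever sends a URL path, so one name cannot point at different servers per path. Since Server B is fully under your control, the usual approach is to point the domain at Server B and have it reverse-proxy /web-app/ to Server A. A sketch, assuming nginx on Server B and a placeholder IP for Server A (Apache's mod_proxy ProxyPass can do the same):

        # On Server B (assumed nginx; 203.0.113.10 stands in for Server A's IP)
        location /web-app/ {
            proxy_pass http://203.0.113.10;        # no URI part: the /web-app/ path is passed through unchanged
            proxy_set_header Host SomeDomain.com;  # the shared host picks its vhost from the Host header
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    The TCP connection goes straight to Server A's IP while the Host header still says SomeDomain.com, so the hosting company's vhost routing is satisfied even though public DNS for the name points at Server B.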

    Read the article

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this, so I'm left guessing. Suppose we have a zpool with redundancy, and take the following sequence of events:
    1. A problem arises in the connection between device D and the server. This causes a large number of failures, and ZFS therefore faults the device, putting the pool in a degraded state.
    2. While the pool is in the degraded state, the pool is mutated (data is written and/or changed).
    3. The connectivity issue is physically repaired, such that device D is reliable again.
    4. Knowing that most data on D is valid, and not wanting to stress the pool with a needless resilver, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected.
    I've read that zpool clear only clears the error counter and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because the mutations in step 2 will not have been successfully written to D; instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device; yet the pool status will not reflect the issue! I would at least assume, based on ZFS' robust integrity mechanisms, that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems:
    1. Reads are not guaranteed to hit all mutations unless a scrub is done; and
    2. Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again, because the drive would appear to ZFS to be corrupting data, since ZFS doesn't remember the previous write failures.
    Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
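
    Whatever the internals do (my understanding, which may be version-dependent, is that ZFS keeps a per-device dirty-time log of the transaction groups a device missed and resilvers just those ranges when it comes back), a scrub is the empirical check: it reads every block, verifies its checksum, and repairs from redundancy, so stale data on D would surface as repaired checksum errors rather than silent inconsistency. A sketch of the sequence, using the pool name from the question:

        zpool clear pool D       # clear the fault; any automatic resilver of
                                 # missed txgs should appear in status right away
        zpool status -v pool     # watch the resilver and the device state
        zpool scrub pool         # force a full checksum pass over every block
        zpool status -v pool     # repaired bytes and CKSUM counts reveal stale data on D

    If the scrub finishes with zero repairs on D, the clear really did leave the pool consistent; if it repairs data, the writes from step 2 were indeed missing until the scrub fixed them.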

    Read the article
