Search Results

Search found 12762 results on 511 pages for 'inverted index'.


  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.
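    A natural companion to the provisioning script is a clean-up script that stops the lab when the course is over, since machines that are no longer running stop accruing compute charges. Below is a minimal sketch of what that could look like; it reuses the variable names and the service naming convention from the script above, is not part of the original post, and exact cmdlet behaviour depends on your Azure PowerShell version.

        # Hypothetical clean-up sketch: stop every training VM created by the script above.
        Select-AzureSubscription -SubscriptionName $subscriptionName

        $numberOfVMs = 16
        for ($index = 1; $index -le $numberOfVMs; $index++)
        {
            $machineName = "$baseVMName-$index"
            $serviceName = "bupa-azure-train-$machineName"
            Write-Host "Stopping VM: $machineName"
            # -Force suppresses the prompt shown when stopping the last VM in a cloud service
            Stop-AzureVM -ServiceName $serviceName -Name $machineName -Force
        }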


  • Ignoring focusLost(), SWT.Verify, or other SWT listeners in Java code

    - by Zoot
    Outside of the actual SWT listener, is there any way to ignore a listener via code? For example, I have a java program that implements SWT Text Widgets, and the widgets have: SWT.Verify listeners to filter out unwanted text input. ModifyListeners to wait for the correct number of valid input characters and automatically set focus (using setFocus())to the next valid field, skipping the other text widgets in the tab order. focusLost(FocusEvent) FocusListeners that wait for the loss of focus from the text widget to perform additional input verification and execute an SQL query based on the user input. The issue I run into is clearing the text widgets. One of the widgets has the format "####-##" (Four Numbers, a hyphen, then two numbers) and I have implemented this listener, which is a modified version of SWT Snippet Snippet179. The initial text for this text widget is " - " to provide visual feedback to the user as to the expected format. Only numbers are acceptable input, and the program automatically skips past the hyphen at the appropriate point. /* * This listener was adapted from the "verify input in a template (YYYY/MM/DD)" SWT Code * Snippet (also known as Snippet179), from the Snippets page of the SWT Project. * SWT Code Snippets can be found at: * http://www.eclipse.org/swt/snippets/ */ textBox.addListener(SWT.Verify, new Listener() { boolean ignore; public void handleEvent(Event e) { if (ignore) return; e.doit = false; StringBuffer buffer = new StringBuffer(e.text); char[] chars = new char[buffer.length()]; buffer.getChars(0, chars.length, chars, 0); if (e.character == '\b') { for (int i = e.start; i < e.end; i++) { switch (i) { case 0: /* [x]xxx-xx */ case 1: /* x[x]xx-xx */ case 2: /* xx[x]x-xx */ case 3: /* xxx[x]-xx */ case 5: /* xxxx-[x]x */ case 6: /* xxxx-x[x] */ { buffer.append(' '); break; } case 4: /* xxxx[-]xx */ { buffer.append('-'); break; } default: return; } } textBox.setSelection(e.start, e.start + buffer.length()); ignore = true; textBox.insert(buffer.toString()); ignore = false; textBox.setSelection(e.start, e.start); return; } int start = e.start; if (start > 6) return; int index = 0; for (int i = 0; i < chars.length; i++) { if (start + index == 4) { if (chars[i] == '-') { index++; continue; } buffer.insert(index++, '-'); } if (chars[i] < '0' || '9' < chars[i]) return; index++; } String newText = buffer.toString(); int length = newText.length(); textBox.setSelection(e.start, e.start + length); ignore = true; textBox.insert(newText); ignore = false; /* * After a valid key press, verifying if the input is completed * and passing the cursor to the next text box. */ if (7 == textBox.getCaretPosition()) { /* * Attempting to change the text after receiving a known valid input that has no results (0000-00). */ if ("0000-00".equals(textBox.getText())) { // "0000-00" is the special "Erase Me" code for these text boxes. ignore = true; textBox.setText(" - "); ignore = false; } // Changing focus to a different textBox by using "setFocus()" method. differentTextBox.setFocus(); } } } ); As you can see, the only method I've figured out to clear this text widget from a different point in the code is by assigning "0000-00" textBox.setText("000000") and checking for that input in the listener. When that input is received, the listener changes the text back to " - " (four spaces, a hyphen, then two spaces). 
There is also a focusLost Listener that parses this text widget for spaces, then in order to avoid unnecessary SQL queries, it clears/resets all fields if the input is invalid (i.e contains spaces). // Adding focus listener to textBox to wait for loss of focus to perform SQL statement. textBox.addFocusListener(new FocusAdapter() { @Override public void focusLost(FocusEvent evt) { // Get the contents of otherTextBox and textBox. (otherTextBox must be <= textBox) String boxFour = otherTextBox.getText(); String boxFive = textBox.getText(); // If either text box has spaces in it, don't perform the search. if (boxFour.contains(" ") || boxFive.contains(" ")) { // Don't perform SQL statements. Debug statement. System.out.println("Tray Position input contains spaces. Ignoring."); //Make all previous results invisible, if any. labels.setVisible(false); differentTextBox.setText(""); labelResults.setVisible(false); } else { //... Perform SQL statement ... } } } ); OK. Often, I use SWT MessageBox widgets in this code to communicate to the user, or wish to change the text widgets back to an empty state after verifying the input. The problem is that messageboxes seem to create a focusLost event, and using the .setText(string) method is subject to SWT.Verify listeners that are present on the text widget. Any suggestions as to selectively ignoring these listeners in code, but keeping them present for all other user input? Thank you in advance for your assistance.
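    One commonly used alternative to the "0000-00" magic value is to detach the SWT.Verify listener around programmatic updates, so setText() is only verified when the change comes from the user. The helper below is a minimal sketch of that idea, not code from the question; it assumes you keep a reference to the Listener instance you registered on the widget.

        import org.eclipse.swt.SWT;
        import org.eclipse.swt.widgets.Listener;
        import org.eclipse.swt.widgets.Text;

        final class ListenerUtil {
            // Update the widget without triggering its SWT.Verify listener, then re-attach it.
            static void setTextSilently(Text textBox, Listener verifyListener, String value) {
                textBox.removeListener(SWT.Verify, verifyListener);
                try {
                    textBox.setText(value);
                } finally {
                    textBox.addListener(SWT.Verify, verifyListener);
                }
            }
        }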


  • Use jQuery to get nested elements from XML

    - by Dkong
    I'm spinning my wheels on this. How do I get the values from the following nested elements from the XML below (I've also put my code below)? I am after the "descShort" value and then the capital "Last" and capital "change" : <indices> <index> <code>DJI</code> <exchange>NYSE</exchange> <liveness>DELAYED</liveness> <indexDesc> <desc>Dow Jones Industrials</desc> <descAbbrev>DOW JONES</descAbbrev> <descShort>DOW JONES</descShort> <firstActive></firstActive> <lastActive></lastActive> </indexDesc> <indexQuote> <capital> <first>11144.57</first> <high>11153.79</high> <low>10973.92</low> <last>11018.66</last> <change>-125.9</change> <pctChange>-1.1%</pctChange> </capital> <gross> <first>11144.57</first> <high>11153.79</high> <low>10973.92</low> <last>11018.66</last> <change>-125.9</change> <pctChange>-1.1%</pctChange> </gross> <totalEvents>4</totalEvents> <lastChanged>16-Apr-2010 16:03:00</lastChanged> </indexQuote> </index> <index> <code>XAO</code> <exchange>ASX</exchange> <liveness>DELAYED</liveness> <indexDesc> <desc>ASX All Ordinaries</desc> <descAbbrev>All Ordinaries</descAbbrev> <descShort>ALL ORDS</descShort> <firstActive>06-Mar-1970</firstActive> <lastActive></lastActive> </indexDesc> <indexQuote> <capital> <first>5007.30</first> <high>5007.30</high> <low>4934.00</low> <last>4939.40</last> <change>-67.9</change> <pctChange>-1.4%</pctChange> </capital> <gross> <first>5007.30</first> <high>5007.30</high> <low>4934.00</low> <last>4939.40</last> <change>-67.9</change> <pctChange>-1.4%</pctChange> </gross> <totalEvents>997</totalEvents> <lastChanged>19-Apr-2010 17:02:54</lastChanged> </indexQuote> </index> $.ajax({ type: "GET", url: "stockindices.xml", dataType: "xml", success: function(xml) { $(xml).find('index').each(function(){ var self = $(this); var code = self.find('indexDesc'); $(code).find('indexDesc').each(function(){ alert(self.find('descShort').text()); }); $('<span class=\"tickerItem\"></span>').html(values[0].text()).appendTo('#marq'); }); } });
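    For reference, the sketch below shows one possible way to read descShort together with the <last> and <change> values from the <capital> block of each <index>. It is an assumption based on the XML sample above; note that jQuery selectors are case-sensitive when traversing real XML, so the lower-case tag names from the sample are used.

        $(xml).find('index').each(function () {
            var self = $(this);
            var name = self.find('indexDesc > descShort').text();
            var capital = self.find('indexQuote > capital');
            var last = capital.find('last').text();
            var change = capital.find('change').text();
            // e.g. "DOW JONES 11018.66 (-125.9)"
            $('<span class="tickerItem"></span>')
                .text(name + ' ' + last + ' (' + change + ')')
                .appendTo('#marq');
        });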


  • How to implement smooth flocking

    - by Craig
    I'm working on a simple survival game, avoid the big guy and chase the the small guys to stay alive for as long as possible. I have taken the chase and evade example from MSDN create and drawn 20 mice on the screen. I want the small guys to flock when they arent evading. They are doing this, but it isnt as smooth as I would like it to be. How do i make the movement smoother? Its very jittery.# Below is what I have going at the moment, flocking code is within the IF statement, when it isnt set to evading. Any help would be greatly appreciated! :) namespace ChaseAndEvade { class MouseSprite { public enum MouseAiState { // evading the cat Evading, // the mouse can't see the "cat", and it's wandering around. Wander } // how fast can the mouse move? public float MaxMouseSpeed = 4.5f; // and how fast can it turn? public float MouseTurnSpeed = 0.20f; // MouseEvadeDistance controls the distance at which the mouse will flee from // cat. If the mouse is further than "MouseEvadeDistance" pixels away, he will // consider himself safe. public float MouseEvadeDistance = 100.0f; // this constant is similar to TankHysteresis. The value is larger than the // tank's hysteresis value because the mouse is faster than the tank: with a // higher velocity, small fluctuations are much more visible. public float MouseHysteresis = 60.0f; public Texture2D mouseTexture; public Vector2 mouseTextureCenter; public Vector2 mousePosition; public MouseAiState mouseState = MouseAiState.Wander; public float mouseOrientation; public Vector2 mouseWanderDirection; int separationImpact = 4; int cohesionImpact = 6; int alignmentImpact = 2; int sensorDistance = 50; public void UpdateMouse(Vector2 position, MouseSprite [] mice, int numberMice, int index) { Vector2 catPosition = position; int enemies = numberMice; // first, calculate how far away the mouse is from the cat, and use that // information to decide how to behave. If they are too close, the mouse // will switch to "active" mode - fleeing. if they are far apart, the mouse // will switch to "idle" mode, where it roams around the screen. // we use a hysteresis constant in the decision making process, as described // in the accompanying doc file. float distanceFromCat = Vector2.Distance(mousePosition, catPosition); // the cat is a safe distance away, so the mouse should idle: if (distanceFromCat > MouseEvadeDistance + MouseHysteresis) { mouseState = MouseAiState.Wander; } // the cat is too close; the mouse should run: else if (distanceFromCat < MouseEvadeDistance - MouseHysteresis) { mouseState = MouseAiState.Evading; } // if neither of those if blocks hit, we are in the "hysteresis" range, // and the mouse will continue doing whatever it is doing now. // the mouse will move at a different speed depending on what state it // is in. when idle it won't move at full speed, but when actively evading // it will move as fast as it can. this variable is used to track which // speed the mouse should be moving. float currentMouseSpeed; // the second step of the Update is to change the mouse's orientation based // on its current state. if (mouseState == MouseAiState.Evading) { // If the mouse is "active," it is trying to evade the cat. The evasion // behavior is accomplished by using the TurnToFace function to turn // towards a point on a straight line facing away from the cat. In other // words, if the cat is point A, and the mouse is point B, the "seek // point" is C. 
// C // B // A Vector2 seekPosition = 2 * mousePosition - catPosition; // Use the TurnToFace function, which we introduced in the AI Series 1: // Aiming sample, to turn the mouse towards the seekPosition. Now when // the mouse moves forward, it'll be trying to move in a straight line // away from the cat. mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition, seekPosition, mouseOrientation, MouseTurnSpeed); // set currentMouseSpeed to MaxMouseSpeed - the mouse should run as fast // as it can. currentMouseSpeed = MaxMouseSpeed; } else { // if the mouse isn't trying to evade the cat, it should just meander // around the screen. we'll use the Wander function, which the mouse and // tank share, to accomplish this. mouseWanderDirection and // mouseOrientation are passed by ref so that the wander function can // modify them. for more information on ref parameters, see // http://msdn2.microsoft.com/en-us/library/14akc2c7(VS.80).aspx ChaseAndEvadeGame.Wander(mousePosition, ref mouseWanderDirection, ref mouseOrientation, MouseTurnSpeed); // if the mouse is wandering, it should only move at 25% of its maximum // speed. currentMouseSpeed = .25f * MaxMouseSpeed; Vector2 separate = Vector2.Zero; Vector2 moveCloser = Vector2.Zero; Vector2 moveAligned = Vector2.Zero; // What the AI does when it sees other AIs for (int j = 0; j < enemies; j++) { if (index != j) { // Calculate a vector towards another AI Vector2 separation = mice[index].mousePosition - mice[j].mousePosition; // Only react if other AI is within a certain distance if ((separation.Length() < this.sensorDistance) & (separation.Length()> 0) ) { moveAligned += mice[j].mouseWanderDirection; float distance = Math.Abs(separation.Length()); if (distance == 0) distance = 1; moveCloser += mice[j].mousePosition; separation.Normalize(); separate += separation / distance; } } } if (moveAligned.LengthSquared() != 0) { moveAligned.Normalize(); } if (moveCloser.LengthSquared() != 0) { moveCloser.Normalize(); } moveCloser /= enemies; mice[index].mousePosition += (separate * separationImpact) + (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact); } // The final step is to move the mouse forward based on its current // orientation. First, we construct a "heading" vector from the orientation // angle. To do this, we'll use Cosine and Sine to tell us the x and y // components of the heading vector. See the accompanying doc for more // information. Vector2 heading = new Vector2( (float)Math.Cos(mouseOrientation), (float)Math.Sin(mouseOrientation)); // by multiplying the heading and speed, we can get a velocity vector. the // velocity vector is then added to the mouse's current position, moving him // forward. mousePosition += heading * currentMouseSpeed; } } }
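    One likely source of the jitter in the code above is that the separation/cohesion/alignment offsets are added straight onto mousePosition, on top of the normal heading-based move, so a mouse can jump several pixels in a single frame. A hedged sketch of an alternative is below: treat the combined flocking vector as a steering target and let the existing TurnToFace/heading code do the actual movement, so per-frame displacement never exceeds currentMouseSpeed. The field names follow the class above; the weights would still need tuning.

        // Inside the wander branch of UpdateMouse, replacing the direct position update:
        Vector2 flockDirection = (separate * separationImpact)
                               + (moveCloser * cohesionImpact)
                               + (moveAligned * alignmentImpact);

        if (flockDirection.LengthSquared() > 0)
        {
            // Steer gradually toward the flocking target instead of jumping to it.
            // (For cohesion, a vector from mousePosition toward the neighbours' centroid
            // is the more usual input than the normalised position sum used above.)
            Vector2 seek = mousePosition + flockDirection;
            mouseOrientation = ChaseAndEvadeGame.TurnToFace(
                mousePosition, seek, mouseOrientation, MouseTurnSpeed);
        }
        // Movement stays in the existing "mousePosition += heading * currentMouseSpeed" step.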


  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC? a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview II. Software Requirements for this tutorial a. Visual Studio 2010/2012. You can get your free copy here Microsoft Visual Studio 2012 b. MVC Framework Option 1 - Install using a standalone installer http://www.microsoft.com/en-us/download/details.aspx?id=30683 Option 2 - Install using Web Platform Installer http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc III. Creating your first MVC4 Application a. On the Visual Studio click file new solution link b. Click Other Project Type>Visual Studio Solutions and on the templates window select blank solution and let us name our solution MVCPrimer. c. Now Click File>New and select Project d. Select Visual C#>Web> and select ASP.NET MVC 4 Web Application and Enter MyWebSite as Name e. Select Empty, Razor as view engine and uncheck Create a Unit test project f. You can now view a basic MVC 4 Application Structure on your solution explorer g. Now we will add our first controller by right clicking on the controllers folder on your solution explorer and select Add>Controller h. Change the name of the controller to HomeController and under the scaffolding options select Empty MVC Controller. i. You will now see a basic controller with an Index method that returns an ActionResult j. We will now add a new View Folder for our Home Controller. Right click on the views folder on your solution explorer and select Add> New Folder> and name this folder Home k. Add a new View by right clicking on Views>Home Folder and select Add View. l. Name the view Index, and select Razor(CSHTML) as View Engine, All checkbox should be unchecked for now and click add. m. Relationship between our HomeController and Home Views Sub Folder n. Add new HTML Contents to our newly created Index View o. Press F5 to run our MVC Application p. We will create our new model, Right click on the models folder of our solution explorer and select Add> Class. q. Let us name our class Customer r. Edit the Customer class with the following code s. 
Open the HomeController by double clickin HomeController of our Controllers folder and edit the HomeControllerusing System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc;   namespace MyWebSite.Controllers {     public class HomeController : Controller     {         //         // GET: /Home/           public ActionResult Index()         {             return View();         }           public ActionResult ListCustomers()         {             List<Models.Customer> customers = new List<Models.Customer>();               //Add First Customer to Our Collection             customers.Add(new Models.Customer()                     {                         Id = 1,                         CompanyName = "Volvo",                         ContactNo = "123-0123-0001",                         ContactPerson = "Gustav Larson",                         Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"                     });                 //Add Second Customer to Our Collection             customers.Add(new Models.Customer()                     {                         Id = 2,                         CompanyName = "BMW",                         ContactNo = "999-9876-9898",                         ContactPerson = "Franz Josef Popp",                         Description = "Bayerische Motoren Werke AG,  (BMW; English: Bavarian Motor Works) is a " +                                       "German automobile, motorcycle and engine manufacturing company founded in 1917. "                     });                 //Add Third Customer to Our Collection             customers.Add(new Models.Customer()             {                 Id = 3,                 CompanyName = "Audi",                 ContactNo = "983-2222-1212",                 ContactPerson = "Karl Benz",                 Description = " is a multinational division of the German manufacturer Daimler AG,"             });               return View(customers);         }     } } t. Let us now create a view for this Class, But before continuing Press Ctrl + Shift + B to rebuild the solution, this will make the previously created model on the Model class drop down of the Add View Menu. Right click on the views>Home folder and select Add>View u. Let us name our View as ListCustomers, Select Razor(CSHTML) as View Engine, Put a check mark on Create a strongly-typed view, and select Customer (MyWebSite.Models) as model class. Slect List on the Scaffold Template and Click OK. v. Run the MVC Application by pressing F5, and on the address bar insert Home/ListCustomers, We should now see a web page similar below.   x. 
You can edit ListCustomers.CSHTML to remove and add HTML codes @model IEnumerable<MyWebSite.Models.Customer>   @{     Layout = null; }   <!DOCTYPE html>   <html> <head>     <meta name="viewport" content="width=device-width" />     <title>ListCustomers</title> </head> <body>     <h2>List of Customers</h2>     <table border="1">         <tr>             <th>                 @Html.DisplayNameFor(model => model.CompanyName)             </th>             <th>                 @Html.DisplayNameFor(model => model.Description)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactPerson)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactNo)             </th>         </tr>         @foreach (var item in Model) {         <tr>             <td>                 @Html.DisplayFor(modelItem => item.CompanyName)             </td>             <td>                 @Html.DisplayFor(modelItem => item.Description)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactPerson)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactNo)             </td>                   </tr>     }         </table> </body> </html> y. Press F5 to run the MVC Application   z. You will notice some @HTML.DisplayFor codes. These are called HTML Helpers you can read more about HTML Helpers on this site http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp   That’s all. You now have your first MVC4 Razor Engine Web Application . . .
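    The post does not reproduce the body of the Customer class at step r, but based on the properties the HomeController assigns above (Id, CompanyName, ContactNo, ContactPerson, Description) it would look roughly like this sketch:

        namespace MyWebSite.Models
        {
            public class Customer
            {
                public int Id { get; set; }
                public string CompanyName { get; set; }
                public string ContactNo { get; set; }
                public string ContactPerson { get; set; }
                public string Description { get; set; }
            }
        }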


  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
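    Since the per-record find_one against the timezones collection is the stated bottleneck, one option (not from the original post) is to memoise lookups on a rounded (lng, lat) key so that nearby records reuse an earlier answer. The sketch below assumes the same tz_lookup_radius global as the code above; the rounding granularity is an assumption to tune against it.

        from bson.son import SON

        _tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # Points falling into the same small cell share one MongoDB query.
            key = (round(lng, 1), round(lat, 1))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return _tz_cache[key]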


  • How can I generate XML application configuration using Zend_Tool?

    - by wimvds
    When creating a new project using zf create project myproject it will create a default project layout with an application.ini in the configs folder. Where can I change these default settings so that it generates (and uses) an XML file (application.xml)? I've looked at the documentation for Zend_Tool (http://framework.zend.com/manual/en/zend.tool.html), but there seems to be no information on how to do this. And suppose you'd like to use a different default folder layout (ie. use htdocs instead of public as your document root), is there a way to specify this as well? Any pointers to relevant information (btw I've looked at the Quickstart, nothing relevant is mentioned there unless I'm overlooking it)? edit I already tried creating a profile (stored in .zf/project/profiles), and used that to create a project (using zf create project myproject myprofile) but that doesn't change anything, even though the .zfproject.xml file in the root of the new project does contain the <applicationConfigFile type="xml"/> setting... The new project contains this (as you can see, it's just the default settings, only the type of applicationConfigFile has been changed) : <?xml version="1.0"?> <projectProfile type="default" version="1.10"> <projectDirectory> <projectProfileFile filesystemName=".zfproject.xml"/> <applicationDirectory classNamePrefix="Application_"> <apisDirectory enabled="false"/> <configsDirectory> <applicationConfigFile type="xml"/> </configsDirectory> <controllersDirectory> <controllerFile controllerName="Index"> <actionMethod actionName="index"/> </controllerFile> <controllerFile controllerName="Error"/> </controllersDirectory> <formsDirectory enabled="false"/> <layoutsDirectory enabled="false"/> <modelsDirectory/> <modulesDirectory enabled="false"/> <viewsDirectory> <viewScriptsDirectory> <viewControllerScriptsDirectory forControllerName="Index"> <viewScriptFile forActionName="index"/> </viewControllerScriptsDirectory> <viewControllerScriptsDirectory forControllerName="Error"> <viewScriptFile forActionName="error"/> </viewControllerScriptsDirectory> </viewScriptsDirectory> <viewHelpersDirectory/> <viewFiltersDirectory enabled="false"/> </viewsDirectory> <bootstrapFile filesystemName="Bootstrap.php"/> </applicationDirectory> <dataDirectory enabled="false"> <cacheDirectory enabled="false"/> <searchIndexesDirectory enabled="false"/> <localesDirectory enabled="false"/> <logsDirectory enabled="false"/> <sessionsDirectory enabled="false"/> <uploadsDirectory enabled="false"/> </dataDirectory> <docsDirectory> <file filesystemName="README.txt"/> </docsDirectory> <libraryDirectory> <zfStandardLibraryDirectory enabled="false"/> </libraryDirectory> <publicDirectory> <publicStylesheetsDirectory enabled="false"/> <publicScriptsDirectory enabled="false"/> <publicImagesDirectory enabled="false"/> <publicIndexFile filesystemName="index.php"/> <htaccessFile filesystemName=".htaccess"/> </publicDirectory> <projectProvidersDirectory enabled="false"/> <temporaryDirectory enabled="false"/> <testsDirectory> <testPHPUnitConfigFile filesystemName="phpunit.xml"/> <testApplicationDirectory> <testApplicationBootstrapFile filesystemName="bootstrap.php"/> </testApplicationDirectory> <testLibraryDirectory> <testLibraryBootstrapFile filesystemName="bootstrap.php"/> </testLibraryDirectory> </testsDirectory> </projectDirectory> </projectProfile>


  • How to trace a function array argument in DTrace

    - by uejio
    I still use dtrace just about every day in my job and found that I had to print an argument to a function which was an array of strings.  The array was variable length up to about 10 items.  I'm not sure if the is the right way to do it, but it seems to work and is not too painful if the array size is small.Here's an example.  Suppose in your application, you have the following function, where n is number of item in the array s.void arraytest(int n, char **s){    /* Loop thru s[0] to s[n-1] */}How do you use DTrace to print out the values of s[i] or of s[0] to s[n-1]?  DTrace does not have if-then blocks or for loops, so you can't do something like:    for i=0; i<arg0; i++        trace arg1[i]; It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine if the other probe should fire or not.  So the first probe for tracing the arraytest function is:pid$target::arraytest:entry{    self->n = arg0;}Then, if I want to print out the first few items of the array, I first check the value of n.  If it's greater than the index that I want to print out, then I can print that index.  For example, if I want to print out the 3rd element of the array, I would do something like:pid$target::arraytest:entry/self->n > 2/{    printf("%s",stringof(arg1 + 2 * sizeof(pointer)));}Actually, that doesn't quite work because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where dtrace is). Also, the sizeof(char *) is 8, but for some reason, I have to use 4 which is the sizeof(uint32_t). (I still don't know how that works.)  So, the script that prints the 3rd element of the array should look like:pid$target::arraytest:entry{    /* first, save the size of the array so that we don't get            invalid address errors when indexing arg1+n. */    self->n = arg0;}pid$target::arraytest:entry/self->n > 2/{    /* print the 3rd element (index = 2) of the second arg. */    i = 2;    size = 4;    self->a_t = copyin(arg1+size*i,size);    printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}If your array is large, then it's quite painful since you have to write one probe for every array index.  For example, here's the full script for printing the first 5 elements of the array:#!/usr/sbin/dtrace -spid$target::arraytest:entry{        /* first, save the size of the array so that we don't get           invalid address errors when indexing arg1+n. 
*/        self->n = arg0;}pid$target::arraytest:entry/self->n > 0/{        i = 0;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 1/{        i = 1;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 2/{        i = 2;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 3/{        i = 3;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 4/{        i = 4;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));} If the array is large, then your script will also have to be very long to print out all values of the array.
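    A guess at why sizeof(uint32_t) works in the copyin arithmetic above: if the traced process is a 32-bit binary, the elements of its char ** array are 4-byte pointers in its address space, regardless of the pointer size inside the DTrace script. Under that assumption, the same probe for a 64-bit target would use 8-byte elements, roughly as sketched below (untested against the original program).

        pid$target::arraytest:entry
        /arg0 > 2/
        {
            /* copy the 3rd pointer out of the target's array, then the string it points to */
            this->p = copyin(arg1 + 2 * sizeof (uint64_t), sizeof (uint64_t));
            printf("%s: a[2]=%s", probefunc, copyinstr(*(uint64_t *)this->p));
        }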


  • ASP.NET ReportViewer “report execution has expired or cannot be found” error when using session state service or SQL Server session state

    - by dotneteer
    We encountered an error like: ReportServerException: The report execution x5pl2245iwvvq055khsxzlj5 has expired or cannot be found. (rsExecutionNotFound)]    Microsoft.Reporting.WebForms.ServerReportSoapProxy.OnSoapException(SoapException e) +72    Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.ProxyMethodInvocation.Execute(RSExecutionConnection connection, ProxyMethod`1 initialMethod, ProxyMethod`1 retryMethod) +428    Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.GetExecutionInfo() +133    Microsoft.Reporting.WebForms.ServerReport.EnsureExecutionSession() +197    Microsoft.Reporting.WebForms.ServerReport.LoadViewState(Object viewStateObj) +256    Microsoft.Reporting.WebForms.ServerReport..ctor(SerializationInfo info, StreamingContext context) +355 [TargetInvocationException: Exception has been thrown by the target of an invocation.]    System.RuntimeMethodHandle._SerializationInvoke(Object target, SignatureStruct&amp; declaringTypeSig, SerializationInfo info, StreamingContext context) +0    System.Reflection.RuntimeConstructorInfo.SerializationInvoke(Object target, SerializationInfo info, StreamingContext context) +108    System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(Object obj, SerializationInfo info, StreamingContext context) +273    System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder) +49    System.Runtime.Serialization.ObjectManager.DoFixups() +223    System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +188    System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203    System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788    System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55    System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281    System.Web.SessionState.SessionStateItemCollection.DeserializeItem(Int32 index) +110    System.Web.SessionState.SessionStateItemCollection.get_Item(Int32 index) +17    System.Web.SessionState.HttpSessionStateContainer.get_Item(Int32 index) +13    System.Web.Util.AspCompatApplicationStep.OnPageStartSessionObjects() +71    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2065 This error occurs long after the report viewer page has closed. It occurs to any asp.net page in the application, rendering the entire application unusable until the user gets a new session. The cause of the problem is that the ReportViewer uses session state. When a page retrieves session from any out-of-state session, the session variable of type Microsoft.Reporting.WebForms.ReportHierarchy is deserialized from the session storage. The deserialization could cause the object to connect to the report server when the report is no longer available. The solution is simple but not pretty. We need to clean up the session variable when the report viewer page is closed. One way is to add some Javascript to the page to handle the window.onunload event. In the event handler, call a web service to clean up the session variable. 
The name of the session variable appears to be randomly generated, so we need to loop through the session variables to find one of the type Microsoft.Reporting.WebForms.ReportHierarchy. Microsoft has implemented pinging between the report viewer and the report server to keep the report alive on the server while the report viewer is up; I hope they will go one step further and take care of this problem.
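    A rough sketch of that clean-up is below: a server-side method, called from a small web service that the window.onunload JavaScript invokes, walks the session and removes any entry whose value comes from the Microsoft.Reporting.WebForms namespace. The class, method name and namespace check are assumptions for illustration, not a Microsoft API.

        using System.Collections.Generic;
        using System.Web.SessionState;

        public static class ReportViewerSessionCleaner
        {
            public static void RemoveReportEntries(HttpSessionState session)
            {
                var staleKeys = new List<string>();
                foreach (string key in session.Keys)
                {
                    object value = session[key];
                    if (value != null &&
                        value.GetType().FullName.StartsWith("Microsoft.Reporting.WebForms"))
                    {
                        staleKeys.Add(key);   // e.g. the randomly named ReportHierarchy entry
                    }
                }
                foreach (string key in staleKeys)
                {
                    session.Remove(key);
                }
            }
        }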


  • SQL SERVER – Data Pages in Buffer Pool – Data Stored in Memory Cache

    - by pinaldave
    This will drop all the clean buffers so we will be able to start again from there. Now, run the following script and check the execution plan of the query. Have you ever wondered what types of data are there in your cache? During SQL Server Trainings, I am usually asked if there is any way one can know how much data in a table is stored in the memory cache? The more detailed question I usually get is if there are multiple indexes on table (and used in a query), were the data of the single table stored multiple times in the memory cache or only for a single time? Here is a query you can run to figure out what kind of data is stored in the cache. USE AdventureWorks GO SELECT COUNT(*) AS cached_pages_count, name AS BaseTableName, IndexName, IndexTypeDesc FROM sys.dm_os_buffer_descriptors AS bd INNER JOIN ( SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id, s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc FROM ( SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id ,allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.hobt_id AND (au.type = 1 OR au.type = 3) UNION ALL SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID FROM sys.allocation_units AS au INNER JOIN sys.partitions AS p ON au.container_id = p.partition_id AND au.type = 2 ) AS s_obj LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj ON bd.allocation_unit_id = obj.allocation_unit_id WHERE database_id = DB_ID() GROUP BY name, index_id, IndexName, IndexTypeDesc ORDER BY cached_pages_count DESC; GO Now let us run the query above and observe the output of the same. We can see in the above query that there are four columns. Cached_Pages_Count lists the pages cached in the memory. BaseTableName lists the original base table from which data pages are cached. IndexName lists the name of the index from which pages are cached. IndexTypeDesc lists the type of index. Now, let us do one more experience here. Please note that you should not run this test on a production server as it can extremely reduce the performance of the database. DBCC DROPCLEANBUFFERS This will drop all the clean buffers and we will be able to start again from there. Now run following script and check the execution plan for the same. USE AdventureWorks GO SELECT UnitPrice, ModifiedDate FROM Sales.SalesOrderDetail WHERE SalesOrderDetailID BETWEEN 1 AND 100 GO The execution plans contain the usage of two different indexes. Now, let us run the script that checks the pages cached in SQL Server. It will give us the following output. It is clear from the Resultset that when more than one index is used, datapages related to both or all of the indexes are stored in Memory Cache separately. Let me know what you think of this article. I had a great pleasure while writing this article because I was able to write on this subject, which I like the most. In the next article, we will exactly see what data are cached and those that are not cached, using a few undocumented commands. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
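    A small companion note, since DBCC DROPCLEANBUFFERS only evicts clean pages: on a test server you can issue CHECKPOINT first so dirty pages are written to disk and the cache really is emptied before you re-run the cached-pages query above. As the article warns, never do this on a production server.

        USE AdventureWorks
        GO
        CHECKPOINT               -- flush dirty pages so they become clean
        GO
        DBCC DROPCLEANBUFFERS    -- test servers only
        GO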


  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider one is about the relevant SEO practices and about appropriate Drupal Modules available. Optimizing your website for search engines is one of the most important aspect of launching & promoting your website especially if ranking matters to you. Understanding SEO For starters, you have begin with Keyword research and then optimize your content according to your findings by tagging, meta tags etc, Drupal modules once installed help you manage a lot of such parameters. Identifying the target keywords Using the Page Title and Token modules PathAuto configuration <H1> heading tags Optimizing Drupal’s default robots.txt file Etc. While Drupal gives you a lot of ability to make your website content worthy & search engine friendly it is important for you to make sure you are not crossing the line or you could get penalized. Modules Overview Drupal Power is at its best when you have these modules & great brain working together. The basic SEO improvements can be achieved easily with the modules enlisted below, but you can win magical rankings if you use them logically & wisely. Understanding your keyword competition & enhancing your content is the basic key to success and ofcourse the modules: Pathauto Automatically create search enging friendly readable URLS from tokens. A token is a piece of data from content, say the author’s username, or the content’s title. For example mysite.com/an-article, rather than mysite.com/node/114 for every node you make. NodeWords Amazingly useful drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases. Page Title Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node. Global Redirect Manage content duplication, 301 redirects, and URL validation with this small, but powerful module. Taxonomy manager Make large additions, or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed. robotstxt A robots.txt file is vital for ensuring that search engine spiders don’t index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin. xmlsitemap An XML Sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want to be included in the index. It even gives you the ability to automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node or at specific interval. Node Import This module allows you to import a set of nodes from a Comma Seperated Values (CSV) or Tab Seperated Values (TSV) text file. Makes it easy to import hundreds-thousands of csv rows and you get to tie up these rows to CCK fields (or locations), and it can file it under the right taxonomy hierarchy. This is Super life saver module.


  • Hello, T4MVC – Goodbye, ASP.NET MVC “magic strings”

    - by Brian Schroer
    I’m working on my first ASP.NET MVC project, and I really, really like MVC. I hate all of the “magic strings”, though: <div id="logindisplay"> <% Html.RenderPartial("LogOnUserControl"); %> </div> <div id="menucontainer"> <ul id="menu"> <li><%=Html.ActionLink("Find Dinner", "Index", "Dinners")%></li> <li><%=Html.ActionLink("Host Dinner", "Create", "Dinners")%></li> <li><%=Html.ActionLink("About", "About", "Home")%></li> </ul> </div> They’re prone to misspelling (causing errors that won’t be caught until runtime), there’s duplication, there’s no Intellisense, and they’re not friendly to refactoring tools.   I had started down the path of creating static classes with constants for the strings, e.g.: <li><%=Html.ActionLink("Find Dinner", DinnerControllerActions.Index, Controllers.Dinner)%></li> …but that was pretty tedious.   Then I discovered T4MVC (http://mvccontrib.codeplex.com/wikipage?title=T4MVC). Just add its T4MVC.tt and T4MVC.settings.t4 files to the root of your MVC application, and it magically (and this time, it’s good magic) generates code that allows you to replace the first code sample above with this: <div id="logindisplay"> <% Html.RenderPartial(MVC.Shared.Views.LogOnUserControl); %> </div> <div id="menucontainer"> <ul id="menu"> <li><%=Html.ActionLink("Find Dinner", MVC.Dinners.Index())%></li> <li><%=Html.ActionLink("Host Dinner", MVC.Dinners.Create())%></li> <li><%=Html.ActionLink("About", MVC.Home.About())%></li> </ul> </div> It gives you a strongly-typed alternative to magic strings for all of these scenarios: Html.Action Html.ActionLink Html.RenderAction Html.RenderPartial Html.BeginForm Url.Action Ajax.ActionLink view names inside controllers But wait, there’s more! It even gives you static helpers for image and script links, e.g.: <img src="<%= Links.Content.nerd_jpg %>" />   <script src="<%= Links.Scripts.Map_js %>" type="text/javascript"></script> …instead of: <img src="/Content/nerd.jpg" />   <script src="/Scripts/Map.js" type="text/javascript"></script>   Thanks to David Ebbo for creating this great tool. You can watch an eight and a half minute video about T4MVC on Channel 9 via this link: http://channel9.msdn.com/posts/jongalloway/Jon-Takes-Five-with-David-Ebbo-on-T4MVC/. You can download T4MVC from its CodePlex page: http://mvccontrib.codeplex.com/wikipage?title=T4MVC.


  • What's New in Oracle's EPM System?

    - by jmorourke
    Oracle’s EPM System R11.1.2.2  is now generally available to customers and partners on the download center.  Although the release number doesn’t sound significant, this is a major release of Oracle’s Hyperion EPM Suite with new modules as well as significant enhancements across the suite.  This release was announced back on April 4th as part of Oracle’s Business Analytics Strategy launch, so analytics is a key aspect of the release.  But the three biggest pieces of news in this release are Oracle Hyperion Planning support for the Exalytics In-Memory Machine, the new Project Financial Planning Application and the new Account Reconciliations Manager module. The Oracle Exalytics In-Memory Machine was announced back in October 2011, at Oracle OpenWorld.  It’s the latest installment from Oracle in a line of engineered systems that combine Oracle Sun hardware, with Oracle database and application technologies – in solutions that are designed to provide high scalability and performance for specific tasks.  Exalytics is the first engineered system specifically designed for high performance analytics.  Running in-memory versions of Oracle Essbase, as well as the Oracle TimesTen database and Oracle BI tools, Exalytics provides speed of thought response times for complex analytic processes with advanced visualizations.  Early adopter customers have achieved 5X to 100X faster interactivity and 6X to 10X faster planning cycles.  Hyperion Planning running with Oracle Exalytics will support enterprise-wide planning, budgeting and forecasting with more detailed data, with hundreds to thousands of users across an organization getting speed of thought performance. The new Hyperion Project Financial Planning application delivered with EPM 11.1.2.2 is also great news for Oracle customers.  This application follows on the heels of other special-purpose planning applications that Oracle has delivered for Workforce and Capital Asset planning.  It allows Project Managers to identify project-related expenses and revenues, plan and propose new projects, and track results over time. Finance Managers can evaluate and compare different projects, manage the funding process, monitor and report the actual financial results and impacts of projects and project portfolios. This new application is applicable to capital projects, contract projects and indirect projects like IT and HR projects across all industries.  This application is a great complement to existing Project Management applications, and helps bridge the gap between these applications, and the financial planning and budgeting process. Account reconciliations has to be one of the biggest bottlenecks and risks in the financial close and reporting process, and many organizations rely on spreadsheets and manual processes to perform this critical process.  To help address this problem, Oracle developed an Account Reconciliation Manager module that is being delivered as part of Oracle Hyperion Financial Close Management.   This module helps automate and streamline account reconciliations and eliminates the chances for errors, omissions and fraud.  But unlike standalone account reconciliation packages, it’s integrated with the rest of the Oracle Hyperion Financial Close suite, and can integrate balances from any source system.  
This can help alleviate a major bottleneck in the financial close process, increase accuracy and reduce risk, and can complement existing investments in Hyperion Financial Management, as well as Oracle and non-Oracle transaction processing systems. Other enhancements in this release include an enhanced Web 2.0 interface for Hyperion Planning and Hyperion Financial Management (HFM), configurable dimensionality in HFM, new Predictive Planning feature in Hyperion Planning, new Detailed Profitability feature in Hyperion Profitability and Cost Management, new Smart View interface for Hyperion Strategic Finance, and integration of the Hyperion applications with JD Edwards Financials. For more information about Oracle EPM System R11.1.2.2 check out the links below: Press Release:  http://www.oracle.com/us/corporate/press/1575775 Product Information on O.com:  http://www.oracle.com/us/solutions/business-analytics/overview/index.html Product Information on OTN:  http://www.oracle.com/technetwork/middleware/epm/downloads/index.html Webcast Replay:  http://www.oracle.com/us/go/index.html?Src=7317510&Act=65&pcode=WWMK11054701MPP046 Please contact me if you have any questions or need additional information – [email protected]

    Read the article

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start from the first position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 or i+1) or jump to any index having the same value as index i. Time limit: 1 second. Max input size: 100000. I have tried to solve this problem using a single-source shortest path approach with Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation, or is there a better and more efficient way of solving the problem? int main(){ vector<int> v; string str; vector<int> sets[10]; cin>>str; int in; for(int i=0;i<str.length();i++){ in=str[i]-'0'; v.push_back(in); sets[in].push_back(i); } int n=v.size(); if(n==1){ cout<<"0\n"; return 0; } if(v[0]==v[n-1]){ cout<<"1\n"; return 0; } vector<int> adj[100001]; for(int i=0;i<10;i++){ for(int j=0;j<sets[i].size();j++){ if(sets[i][j]>0) adj[sets[i][j]].push_back(sets[i][j]-1); if(sets[i][j]<n-1) adj[sets[i][j]].push_back(sets[i][j]+1); for(int k=j+1;k<sets[i].size();k++){ if(abs(sets[i][j]-sets[i][k])!=1){ adj[sets[i][j]].push_back(sets[i][k]); adj[sets[i][k]].push_back(sets[i][j]); } } } } queue<int> q; q.push(0); int dist[100001]; bool visited[100001]={false}; dist[0]=0; visited[0]=true; int c=0; while(!q.empty()){ int dq=q.front(); q.pop(); c++; for(int i=0;i<adj[dq].size();i++){ if(visited[adj[dq][i]]==false){ dist[adj[dq][i]]=dist[dq]+1; visited[adj[dq][i]]=true; q.push(adj[dq][i]); } } } cout<<dist[n-1]<<"\n"; return 0; }
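    One standard way to avoid building the adjacency list at all is to group indices by digit once, and then, during the BFS itself, expand a digit's whole bucket only the first time that digit is dequeued and clear the bucket afterwards. Each index and each bucket is then touched a constant number of times, so the whole search is O(n). The following is a minimal, untested C++ sketch of that idea, assuming the same input format (a digit string on standard input) as the code above:

    #include <iostream>
    #include <string>
    #include <vector>
    #include <queue>
    using namespace std;

    int main() {
        string s;
        cin >> s;
        int n = s.size();
        if (n == 1) { cout << 0 << "\n"; return 0; }

        vector<vector<int>> buckets(10);           // indices grouped by digit value
        for (int i = 0; i < n; i++)
            buckets[s[i] - '0'].push_back(i);

        vector<int> dist(n, -1);                   // -1 means "not visited yet"
        queue<int> q;
        dist[0] = 0;
        q.push(0);

        while (!q.empty()) {
            int i = q.front(); q.pop();
            if (i == n - 1) break;

            // Positional neighbours i-1 and i+1.
            for (int j : {i - 1, i + 1})
                if (j >= 0 && j < n && dist[j] == -1) {
                    dist[j] = dist[i] + 1;
                    q.push(j);
                }

            // Same-value neighbours: visit the whole bucket once, then empty it
            // so this digit is never scanned again -- that is what keeps the
            // total work linear instead of quadratic.
            for (int j : buckets[s[i] - '0'])
                if (dist[j] == -1) {
                    dist[j] = dist[i] + 1;
                    q.push(j);
                }
            buckets[s[i] - '0'].clear();
        }
        cout << dist[n - 1] << "\n";
        return 0;
    }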

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called Master-Master or Bi-Directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scaling out purposes. Oracle GoldenGate is known to be one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to Oracle GoldenGate 12.1.2 Documentation - Configuring Oracle GoldenGate for Active-Active High Availability for more information): Robust loop-back prevention Comprehensive conflict resolution and detection support Heterogeneous support across different database versions and operating systems. Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example below sets up active-active replication between a MySQL 5.5 and a MySQL 5.6 instance, as follows: MySQL 5.5 (Manager Port: 15105)  Extract EXTRACT demoex01 SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock') DBOPTIONS CONNECTIONPORT 3305 DBOPTIONS HOST oraclelinux6.localdomain SOURCEDB test USERID root, PASSWORD mysql EXTTRAIL ./dirdat/extract/de TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl REPORTROLLOVER AT 05:30 ON saturday TABLE test.TCUSTMER; TABLE test.TCUSTORD; Pump EXTRACT demopm01 RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30 RMTTRAIL ./dirdat/replicat/ps PASSTHRU TABLE test.TCUSTMER; TABLE test.TCUSTORD; Replicat replicat demorp01 setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock') dboptions host oraclelinux6.localdomain, connectionport 3305 targetdb test, userid root, password mysql sourcedefs ./dirdat/replicat/democust.def discardfile ./dirrpt/demprp01.dsc, purge REPERROR (DEFAULT, ABEND) REPERROR(1062, IGNORE) map test.TCUSTMER, target test.TCUSTMER,colmap(usedefaults, region_code="region code"); map test.TCUSTORD, target test.TCUSTORD; MySQL 5.6 (Manager Port: 15106) Replicat replicat demorp01 setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock') dboptions host oraclelinux6.localdomain, connectionport 3306 targetdb test, userid root, password mysql --assumetargetdefs sourcedefs ./dirdat/replicat/democust.def discardfile ./dirrpt/demprp01.dsc, purge map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code); map test.TCUSTORD, target test.TCUSTORD; Extract EXTRACT demoex01 SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock') DBOPTIONS CONNECTIONPORT 3306 DBOPTIONS HOST oraclelinux6.localdomain SOURCEDB test USERID root, PASSWORD mysql EXTTRAIL ./dirdat/extract/de TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl TABLE test.TCUSTMER; TABLE test.TCUSTORD; Pump EXTRACT demopm01 RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30 RMTTRAIL ./dirdat/replicat/ps PASSTHRU TABLE test.TCUSTMER; TABLE test.TCUSTORD; The setup parameters are quite self-explanatory. The key part of the setup is to avoid looping the replicated data.
Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by replicats, and thus avoids extracting those transactions again with its extracts. The example setup for the extract on the MySQL 5.5 instance is shown as follows.  TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl Setting up active-active replication is often more complicated than this and requires additional considerations; I will elaborate on these in follow-up discussions. 

    Read the article

  • Writing an unthemed view while still using Orchard shapes and helpers

    - by Bertrand Le Roy
    This quick tip will show how you can write a custom view for a custom controller action in Orchard that does not use the current theme, but that still retains the ability to use shapes, as well as zones, Script and Style helpers. The controller action, first, needs to opt out of theming: [Themed(false)] public ActionResult Index() {} Then, we still want to use a shape as the view model, because Clay is so awesome: private readonly dynamic _shapeFactory; public MyController(IShapeFactory shapeFactory) { _shapeFactory = shapeFactory; } [Themed(false)] public ActionResult Index() { return View(_shapeFactory.MyShapeName( Foo: 42, Bar: "baz" )); } As you can see, we injected a shape factory, and that enables us to build our shape from our action and inject that into the view as the model. Finally, in the view (that would be in Views/MyController/Index.cshtml here), just use helpers as usual. The only gotcha is that you need to use “Layout” in order to declare zones, and that two of those zones, Head and Tail, are mandatory for the top and bottom scripts and stylesheets to be injected properly. Names are important here. @{ Style.Include("somestylesheet.css"); Script.Require("jQuery"); Script.Include("somescript.js"); using(Script.Foot()) { <script type="text/javascript"> $(function () { // Do stuff }) </script> } } <!DOCTYPE html> <html> <head> <title>My unthemed page</title> @Display(Layout.Head) </head> <body> <h1>My unthemed page</h1> <div>@Model.Foo is the answer.</div> </body> @Display(Layout.Tail) </html> Note that if you define your own zones using @Display(Layout.SomeZone) in your view, you can perfectly well send additional shapes to them from your controller action, if you injected an instance of IWorkContextAccessor: _workContextAccessor.GetContext().Layout .SomeZone.Add(_shapeFactory.SomeOtherShape()); Of course, you’ll need to write a SomeOtherShape.cshtml template for that shape but I think this is pretty neat.

    Read the article

  • Octree subdivision problem

    - by ChaosDev
    I'm creating an octree manually and want a function that effectively divides all nodes and their subnodes - for example, I press a button and the subnodes are divided; press again, and all subnodes are divided again. It must go 1 - 8 - 64. The problem is that I don't understand how to organize the recursive loops for that. OctreeNode in my unoptimized implementation contains pointers to the subnodes (children), the parent, an extra vector (containing duplicates of the children), generation info and lots of information for drawing. class gOctreeNode { //necessary fields gOctreeNode* FrontBottomLeftNode; gOctreeNode* FrontBottomRightNode; gOctreeNode* FrontTopLeftNode; gOctreeNode* FrontTopRightNode; gOctreeNode* BackBottomLeftNode; gOctreeNode* BackBottomRightNode; gOctreeNode* BackTopLeftNode; gOctreeNode* BackTopRightNode; gOctreeNode* mParentNode; std::vector<gOctreeNode*> m_ChildsVector; UINT mGeneration; bool mSplitted; bool isSplitted(){return m_Splitted;} .... //unnecessary fields }; DivideNode of the Octree class fills these fields, sets mSplitted to true, and prepares the node for correct drawing. The Octree contains basic nodes (m_nodes). A basic node can be divided, but now I want to recursively divide an already divided basic node with 8 subnodes. So I wrote this function. void DivideAllChildCells(int ix,int ih,int id) { std::vector<gOctreeNode*> nlist; std::vector<gOctreeNode*> dlist; int index = (ix * m_Height * m_Depth) + (ih * m_Depth) + (id * 1);//get index of specified node gOctreeNode* baseNode = m_nodes[index].get(); nlist.push_back(baseNode->FrontTopLeftNode); nlist.push_back(baseNode->FrontTopRightNode); nlist.push_back(baseNode->FrontBottomLeftNode); nlist.push_back(baseNode->FrontBottomRightNode); nlist.push_back(baseNode->BackBottomLeftNode); nlist.push_back(baseNode->BackBottomRightNode); nlist.push_back(baseNode->BackTopLeftNode); nlist.push_back(baseNode->BackTopRightNode); bool cont = true; UINT d = 0;//additional recursive loop param (?) UINT g = 0;//additional recursive loop param (?) LoopNodes(d,g,nlist,dlist); //Divide resulting nodes for(UINT i = 0; i < dlist.size(); i++) { DivideNode(dlist[i]); } } And now, back to the main question: here is LoopNodes, which must do all the work of collecting the dlist nodes for splitting. void LoopNodes(UINT& od,UINT& og,std::vector<gOctreeNode*>& nlist,std::vector<gOctreeNode*>& dnodes) { //od++;//recursion depth bool f = false; //pass through childs for(UINT i = 0; i < 8; i++) { if(nlist[i]->isSplitted())//if node splitted and have childs { //pass forward through tree for(UINT j = 0; j < 8; j++) { nlist[j] = nlist[j]->m_ChildsVector[j];//set pointers to these childs } LoopNodes(od,og,nlist,dnodes); } else //if no childs { //add to split vector dnodes.push_back(nlist[i]); } } } This version of LoopNodes only works correctly for 2 (or 1?) generations; after that it will not divide neighbouring nodes, only some corners. I need a correct algorithm. All I need is a correct version of LoopNodes which can add all nodes for DivideNode.
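    For what it's worth, one way to restructure this (a minimal, untested sketch, not a drop-in fix) is to stop advancing all eight nlist pointers in lockstep and instead recurse per node: walk down through nodes that are already split and call DivideNode only on the leaves. Every call then turns each current leaf into eight new ones (1 -> 8 -> 64 -> ...). The sketch reuses the names from the question (gOctreeNode, isSplitted, m_ChildsVector, DivideNode, m_nodes, m_Height, m_Depth) and assumes both functions are added as members of the same octree class:

    // Recurse through already-split nodes; only leaves are actually divided.
    void DivideLeaves(gOctreeNode* node)
    {
        if (node == nullptr)
            return;

        if (node->isSplitted())
        {
            // Already divided: descend into each of the eight children.
            for (gOctreeNode* child : node->m_ChildsVector)
                DivideLeaves(child);
        }
        else
        {
            // A leaf: the only place we call DivideNode.
            DivideNode(node);
        }
    }

    void DivideAllChildCells(int ix, int ih, int id)
    {
        int index = (ix * m_Height * m_Depth) + (ih * m_Depth) + id;
        DivideLeaves(m_nodes[index].get());
    }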

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
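    As a rough illustration of the "type carries the length" idea in a mainstream language (a loose C++ analogue, assuming a C++11-or-later compiler, not the dependent types the article has in mind): when both the array length and the index are visible to the compiler, the out-of-range access can be rejected before the program ever runs.

    #include <array>

    int main() {
        std::array<int, 2> xs{10, 20};

        int ok = std::get<1>(xs);    // compiles: index 1 is valid for a length-2 array
        // int bad = std::get<4>(xs); // would not compile: std::get's compile-time check
                                      // rejects index 4 on an array whose type says length 2

        return ok;
    }

    The check only works when the index is a compile-time constant, which is exactly the limitation the article describes: carrying lengths and index values around in the types is the hard part.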

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive. Here's the code in the file index.shtml: <head> <!--#set var="page" value="Home" --> <!--#include file="headlinks.shtml" --> <style> img#ref { float: right; margin-left: 8px; border-width: 0px; } </style> </head> Here's the code in the file headlinks.shtml: <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title> <!--#set var="docroot" value="${DOCUMENT_ROOT}" --> <!--#echo var="docroot" --> <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" --> <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> <!--#else --> <link rel="stylesheet" type="text/css" href="style.css"> <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" /> <!--#endif --> And here's the output for the file index.shtml: <title>Home &ndash; dragarch</title> /Applications/MAMP/htdocs <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly. I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html I'm at a complete loss as to why this is the case, and need a bit of help here. EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, ASSERT directives are a general-purpose feature that can be used independently of stub objects. For instance, you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case and of providing a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object. 
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need (Name = Meaning): ... ; _ET_DYN = shared object; _ET_EXEC = executable object; _ET_REL = relocatable object; ... STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object. 
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table (Section Attribute = Meaning): BITS = section is not of type SHT_NOBITS; NOBITS = section is of type SHT_NOBITS. SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • jqGrid (Delete row) - How to send additional POST data???

    - by ronanray
    Hi experts, I'm having a problem with the jqGrid delete mechanism, as it only sends the "oper" and "id" parameters as POST data (id is the primary key of the table). The problem is, I need to delete a row based on the id and another column value, let's say user_id. How do I add this user_id to the POST data? I can summarize the issue as the following: How to get the cell value (user_id) of the selected row? AND, how to add that user_id to the POST data so it can be retrieved from the code-behind where the actual delete process takes place. Sample code: jQuery("#tags").jqGrid({ url: "subgrid.process.php", editurl: "subgrid.process.php?", datatype: "json", mtype: "POST", colNames:['id','user_id','status_type_id'], colModel:[{name:'id', index:'id', width:100, editable:true}, {name:'user_id', index:'user_id', width:200, editable:true}, {name:'status_type_id', index:'status_type_id', width:200} ], pager: '#pagernav2', rowNum:10, rowList:[10,20,30,40,50,100], sortname: 'id', sortorder: "asc", caption: "Test", height: 200 }); jQuery("#tags").jqGrid('navGrid','#pagernav2', {add:true,edit:false,del:true,search:false}, {}, {mtype:"POST",closeAfterAdd:true,reloadAfterSubmit:true}, // add options {mtype:"POST",reloadAfterSubmit:true}, // del options {} // search options ); Help....

    Read the article

  • apache htaccess rewrite rules make redirection loop

    - by Ali
    Hi guys, I have a very strange problem with Apache .htaccess URL rewriting and redirection. Here's my setup: I have a Zend application with a single point of entry (index.php) directly under my Apache document root (call this the "public" folder). I also have all other public files (images, js, css, etc.) under the public folder. Here, I also have a WordPress blog under the "blog" folder. There's an empty test folder too. The Problem When I go to mydomain.com/blog, I get redirected to http://www.theredpin.com/blog (correctly), then to http://www.theredpin.com/blog/ (just with an extra / at the end), and finally to http://theredpin.com/blog/ -- and we're back where we started. The loop continues. What I don't understand is why WordPress would try to remove the www. I'm guessing it's WordPress because my empty test folder acts just fine! PLEASE HELP. I'M REALLY DESPERATE :( Thank you very much Other things that happen: When I go to mydomain.com, I correctly get redirected to www.mydomain.com When I go to www.mydomain.com, I correctly stay where I am When I go to www.mydomain.com/test or mydomain.com/test, behaviour is correct. Setup So my .htaccess file does the following: If there's no www., then add it and do a 301 redirect. Here's the code I use: RewriteCond %{HTTP_HOST} ^mydomain.com [NC] RewriteRule ^(.*)$ http://www.mydomain.com/$1 [L,R=301] If the request is NOT for a resource (image, etc.) or the blog, then load the Zend application by rewriting to index.php: RewriteRule !((^blog(/)?.*$)|(.(js|ico|gif|jpg|jpeg|png|css|cur|JPG|html|txt))$) index.php Thanks again for all your help!!! Ali

    Read the article

  • iPhone how to enable or disable UITabBar

    - by Dean Wagstaff
    I have a simple app with a tab bar which based upon user input disables one or more of the bar items. I understand I need to use a UITabBarDelegate which I have tried to use. However when I call the delegate method I get an uncaught exception error [NSObject doesNotRecognizeSelector]. I am not sure I am doing this all right or that I haven't missed something. Any suggestions. What I have now is the following: WMViewController.h import define kHundreds 0 @interface WMViewController : UIViewController { } @end WMViewController.m import "WMViewController.h" import "MLDTabBarControllerAppDelegate.h" @implementation WMViewController (IBAction)finishWizard{ MLDTabBarControllerAppDelegate *appDelegate = (MLDTabBarControllerAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate setAvailabilityTabIndex:0 Enable:TRUE]; } MLDTabBarControllerAppDelegate.h import @interface MLDTabBarControllerAppDelegate : NSObject { } (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable; @end MLDTabBarControllerAppDelegate.m import "MLDTabBarControllerApplicationDelegate.h" import "MyListDietAppDelegate.h" @implementation MLDTabBarControllerAppDelegate (void) setAvailabilityTabIndex: (NSInteger) index Enable: (BOOL) enable { UITabBarController *controller = (UITabBarController *)[[[MyOrganizerAppDelegate getTabBarController] viewControllers ] objectAtIndex:index]; [[controller tabBarItem] setEnabled:enable]; } @end I get what appear to be a good controller object but crash on the [[controller tabBarItem]setEnabled:enable]; What am I missing... Any suggestions Thanks,

    Read the article

  • ASP.NET MVC 3: RouteExistingFiles = true seems to have no effect

    - by Oleksandr Kaplun
    I'm trying to understand how RouteExistingFiles works. So I've created a new MVC 3 internet project (MVC 4 behaves the same way) and put a HTMLPage.html file to the Content folder of my project. Now I went to the Global.Asax file and edited the RegisterRoutes function so it looks like this: public static void RegisterRoutes(RouteCollection routes) { routes.RouteExistingFiles = true; //Look for routes before looking if a static file exists routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new {controller = "Home", action = "Index", id = UrlParameter.Optional} // Parameter defaults ); } Now it should give me an error when I'm requesting a localhost:XXXX/Content/HTMLPage.html since there's no "Content" controller and the request definitely hits the default pattern. But instead I'm seeing my HTMLPage. What am I doing wrong here? Update: I think I'll have to give up. Even if I'm adding a route like this one: routes.MapRoute("", "Content/{*anything}", new {controller = "Home", action = "Index"}); it still shows me the content of the HTMLPage. When I request a url like ~/Content/HTMLPage I'm getting the Index page as expected, but when I add a file extenstion like .html or .txt the content is shown (or a 404 error if the file does not exist). If anyone can check this in VS2012 please let me know what result you're getting. Thank you.

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >