Search Results

Search found 88714 results on 3549 pages for 'data type'.


  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, then you might think about moving some of that data down to the phone and cache the data on the phone. An example would be a static list of US State Codes that you are asking the user to select from. Since this is data that does not change very often, this is one set of data that would be great to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter cache it and retrieve it from the cache. In this blog post you will see how to create a WCF service to retrieve data from a Product table in a database and send that data as XML to the phone and store it in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox. Step 1: Create a Windows Phone Application The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the “ContentPanel” grid: <StackPanel>  <Button Name="btnGetXml"          Content="Get XML"          Click="btnGetXml_Click" />  <Button Name="btnRead"          Content="Read XML"          IsEnabled="False"          Click="btnRead_Click" />  <ListBox Name="lstData"            Height="430"            ItemsSource="{Binding}"            DisplayMemberPath="ProductName" /></StackPanel> Now it is time to create the WCF Service Application that you will call to get the XML from a table in a SQL Server database. Step 2: Create a WCF Service Application Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you don’t generally need these items. Add a new WCF Service class called ProductService. In the IProductService class modify the void DoWork() method with the following code: [OperationContract]string GetProductXml(); Open the code behind in the ProductService.svc and create the GetProductXml() method. This method (shown below) will connect up to a database and retrieve data from a Product table. public string GetProductXml(){  string ret = string.Empty;  string sql = string.Empty;  SqlDataAdapter da;  DataSet ds = new DataSet();   sql = "SELECT ProductId, ProductName,";  sql += " IntroductionDate, Price";  sql += " FROM Product";   da = new SqlDataAdapter(sql,    ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);   da.Fill(ds);   // Create Attribute based XML  foreach (DataColumn col in ds.Tables[0].Columns)  {    col.ColumnMapping = MappingType.Attribute;  }   ds.DataSetName = "Products";  ds.Tables[0].TableName = "Product";  ret = ds.GetXml();   return ret;} After retrieving the data from the Product table using a DataSet, you will want to set each column’s ColumnMapping property to Attribute. Using attribute based XML will make the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want to assign to the XML. You then set the TableName property on the DataTable to the name you want each element to be in your XML. 
The last thing you need to do is to call the GetXml() method on the DataSet object which will return an XML string of the data in your DataSet object. This is the value that you will return from the service call. The XML that is returned from the above call looks like the following: <Products>  <Product ProductId="1"           ProductName="PDSA .NET Productivity Framework"           IntroductionDate="9/3/2010"           Price="5000" />  <Product ProductId="3"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="7/1/2010"           Price="599.00" />  ...  ...  ... </Products> The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name. <connectionStrings>  <add name="Sandbox"        connectionString="Server=Localhost;Database=Sandbox;                         Integrated Security=Yes"/></connectionStrings> The Product Table You will need a Product table that you can read data from. I used the following structure for my product table. Add any data you want to this table after you create it in your database. CREATE TABLE Product(  ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,  ProductName varchar(50) NOT NULL,  IntroductionDate datetime NULL,  Price money NULL) Step 3: Connect to WCF Service from Windows Phone Application Back in your Windows Phone application you will now need to add a Service Reference to the WCF Service application you just created. Right-mouse click on the Windows Phone Project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceRefrence”, then click the OK button. If you entered everything correctly, Visual Studio will generate some code that allows you to connect to your Product service. On the MainPage.xaml designer window double click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure make a call to a GetXmlFromServer() method. This method will also need a “Completed” event procedure to be written since all communication with a WCF Service from Windows Phone must be asynchronous.  Write these two methods as follows: private const string KEY_NAME = "ProductData"; private void GetXmlFromServer(){  ProductServiceClient client = new ProductServiceClient();   client.GetProductXmlCompleted += new     EventHandler<GetProductXmlCompletedEventArgs>      (client_GetProductXmlCompleted);   client.GetProductXmlAsync();  client.CloseAsync();} void client_GetProductXmlCompleted(object sender,                                   GetProductXmlCompletedEventArgs e){  // Store XML data in Isolated Storage  IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;   btnRead.IsEnabled = true;} As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it into Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that I added to specify the name of the key. You will use this constant later to read the data from Isolated Storage. Step 4: Create a Product Class Even though you stored XML data into Isolated Storage when you read that data out you will want to convert each element in the XML file into an actual Product object. 
This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below: public class Product{  public string ProductName{ get; set; }  public int ProductId{ get; set; }  public DateTime IntroductionDate{ get; set; }  public decimal Price{ get; set; }} Step 5: Read Settings from Isolated Storage Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double click on the Read XML button to generate the Click event procedure. From the Click event procedure call a method named ReadProductXml().Create this method as shown below: private void ReadProductXml(){  XElement xElem = null;   if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))  {    xElem = XElement.Parse(     IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());     // Create a list of Product objects    var products =         from prod in xElem.Descendants("Product")        orderby prod.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),          ProductName = prod.Attribute("ProductName").Value,          IntroductionDate =             Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(prod.Attribute("Price").Value)        };     lstData.DataContext = products;  }} The ReadProductXml() method checks to make sure that the key name that you saved your XML as exists in Isolated Storage prior to trying to open it. If the key name exists, then you retrieve the value as a string. Use the XElement’s Parse method to convert the XML string to a XElement object. LINQ to XML is used to iterate over each element in the XElement object and create a new Product object from each attribute in your XML file. The LINQ to XML code also orders the XML data by the ProductName. After the LINQ to XML code runs you end up with an IEnumerable collection of Product objects in the variable named “products”. You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to “ProductName” so it will now display the product name for each row in your products collection. Summary In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service and store it into Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the data stored and display that data in a Windows Phone list box. This same technique can be used in Silverlight or WPF applications too. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  
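As a small addition to the article's code (my own sketch, not part of the original download), the "retrieve once, then cache" flow described in the introduction could be wired up by checking Isolated Storage before calling the WCF service, reusing the KEY_NAME constant and the two methods shown above:

    // Sketch only: use the cached XML if it is already in Isolated Storage,
    // otherwise call the WCF service one time to fetch and store it.
    private void LoadProducts()
    {
      if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))
      {
        // A cached copy exists, so no network call is needed.
        ReadProductXml();
      }
      else
      {
        // First run: fetch from the server; the Completed event stores the XML,
        // after which ReadProductXml() can be called to display it.
        GetXmlFromServer();
      }
    }

If you go this route, you could also call ReadProductXml() at the end of client_GetProductXmlCompleted so the list fills as soon as the first download finishes.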

    Read the article

  • Swap not available on System Monitor

    - by Zaki
    I had a swap partition of 1GB (RAM 1GB, Ubuntu 12.04 lts). Now swap is not shown on System Monitor neither can I hibernate my pc (sudo pm-hibernate). blkid output: /dev/sda1: UUID="B8B4FBB1B4FB706C" TYPE="ntfs" /dev/sda2: UUID="2ea7d608-2d89-4e41-9436-d05cb3ce8871" TYPE="swap" /dev/sda3: UUID="3219d03a-67e4-454b-8ce7-a27831846e35" TYPE="ext4" /dev/sda5: LABEL="Softwares" UUID="AC1CC3301CC2F47C" TYPE="ntfs" /dev/sda6: LABEL="Education" UUID="1E103E6C103E4B53" TYPE="ntfs" /dev/sda7: LABEL="Recreation" UUID="2CC8D181C8D149AA" TYPE="ntfs" /dev/sda8: LABEL="Miscellaneous" UUID="0274D6B174D6A727" TYPE="ntfs" /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=3219d03a-67e4-454b-8ce7-a27831846e35 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=2ea7d608-2d89-4e41-9436-d05cb3ce8871 none swap sw 0 0 free -m total used free shared buffers cached Mem: 991 867 123 0 27 418 -/+ buffers/cache: 421 569 Swap: 0 0 0 cat /proc/swaps Filename Type Size Used Priority fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9f369f36 Device Boot Start End Blocks Id System /dev/sda1 * 63 31471334 15735636 7 HPFS/NTFS/exFAT /dev/sda2 31471616 33470447 999416 82 Linux swap / Solaris /dev/sda3 33472512 62539775 14533632 83 Linux /dev/sda4 62541045 312592769 125025862+ f W95 Ext'd (LBA) /dev/sda5 62541108 125066024 31262458+ 7 HPFS/NTFS/exFAT /dev/sda6 125066088 187591004 31262458+ 7 HPFS/NTFS/exFAT /dev/sda7 187591068 250115984 31262458+ 7 HPFS/NTFS/exFAT /dev/sda8 250116048 312576704 31230328+ 7 HPFS/NTFS/exFAT swapon --all swapon: /dev/sda2: swapon failed: Invalid argument dmesg | grep -A 5 -B 5 -i swap [ 9.487404] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131645 [ 9.487413] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131330 [ 9.487418] EXT4-fs (sda3): 16 orphan inodes deleted [ 9.487420] EXT4-fs (sda3): recovery complete [ 9.578600] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null) [ 20.580539] Swap area shorter than signature indicates [ 20.588363] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 20.619443] udevd[330]: starting version 175 [ 20.649959] lp: driver loaded but no devices found [ 20.662972] [drm] Initialized drm 1.1.0 20060810 [ 20.675515] i915 0000:00:02.0: setting latency timer to 64 -- [ 72.288573] PM: thaw of drv:sr dev:3:0:0:0 complete after 178.143 msecs [ 72.288578] PM: thaw of drv:scsi_device dev:3:0:0:0 complete after 178.136 msecs [ 72.299677] PM: thaw of drv:scsi_device dev:2:0:0:0 complete after 189.270 msecs [ 72.309473] PM: thaw of devices complete after 202.763 msecs [ 72.309668] PM: writing image. [ 72.309670] PM: Cannot find swap device, try swapon -a. [ 72.309699] PM: Cannot get swap writer [ 72.329896] Restarting tasks ... done. 
[ 72.331777] PM: Basic memory bitmaps freed [ 72.331792] video LNXVIDEO:00: Restoring backlight state [ 72.420048] option1 ttyUSB0: option_instat_callback: error -84 [ 72.804047] option1 ttyUSB0: option_instat_callback: error -84 -- [ 145.960625] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 145.972036] sd 7:0:0:0: [sdb] Attached SCSI removable disk [ 172.430508] PPP BSD Compression module registered [ 172.455583] PPP Deflate Compression module registered [ 332.260789] type=1400 audit(1381814763.342:27): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 1913.030998] Swap area shorter than signature indicates [ 2022.530155] type=1400 audit(1381816453.610:28): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 4062.729509] Swap area shorter than signature indicates Please help. Thanks in advance. df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 14G 6.1G 7.0G 47% / udev 488M 4.0K 488M 1% /dev tmpfs 199M 868K 198M 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 496M 224K 496M 1% /run/shm

    Read the article

  • If the model is validating the data, shouldn't it throw exceptions on bad input?

    - by Carlos Campderrós
    Reading this SO question it seems that throwing exceptions for validating user input is frowned upon. But who should validate this data? In my applications, all validations are done in the business layer, because only the class itself really knows which values are valid for each one of its properties. If I were to copy the rules for validating a property to the controller, it is possible that the validation rules change and now there are two places where the modification should be made. Is my premise that validation should be done on the business layer wrong? What I do So my code usually ends up like this: <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if (mb_strlen($n) == 0) { throw new ValidationException("Name cannot be empty"); } $this->name = $n; } public function setAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { throw new ValidationException("Age $a is not valid"); } $a = (int)$a; } if ($a < 0 || $a > 150) { throw new ValidationException("Age $a is out of bounds"); } $this->age = $a; } // other getters, setters and methods } In the controller, I just pass the input data to the model, and catch thrown exceptions to show the error(s) to the user: <?php $person = new Person(); $errors = array(); // global try for all exceptions other than ValidationException try { // validation and process (if everything ok) try { $person->setAge($_POST['age']); } catch (ValidationException $e) { $errors['age'] = $e->getMessage(); } try { $person->setName($_POST['name']); } catch (ValidationException $e) { $errors['name'] = $e->getMessage(); } ... } catch (Exception $e) { // log the error, send 500 internal server error to the client // and finish the request } if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } Is this a bad methodology? Alternate method Should maybe I create methods for isValidAge($a) that return true/false and then call them from the controller? <?php class Person { private $name; private $age; public function setName($n) { $n = trim($n); if ($this->isValidName($n)) { $this->name = $n; } else { throw new Exception("Invalid name"); } } public function setAge($a) { if ($this->isValidAge($a)) { $this->age = $a; } else { throw new Exception("Invalid age"); } } public function isValidName($n) { $n = trim($n); if (mb_strlen($n) == 0) { return false; } return true; } public function isValidAge($a) { if (!is_int($a)) { if (!ctype_digit(trim($a))) { return false; } $a = (int)$a; } if ($a < 0 || $a > 150) { return false; } return true; } // other getters, setters and methods } And the controller will be basically the same, just instead of try/catch there are now if/else: <?php $person = new Person(); $errors = array(); if ($person->isValidAge($age)) { $person->setAge($age); } catch (Exception $e) { $errors['age'] = "Invalid age"; } if ($person->isValidName($name)) { $person->setName($name); } catch (Exception $e) { $errors['name'] = "Invalid name"; } ... if (count($errors) == 0) { // process } else { showErrorsToUser($errors); } So, what should I do? I'm pretty happy with my original method, and my colleagues to whom I have showed it in general have liked it. Despite this, should I change to the alternate method? Or am I doing this terribly wrong and I should look for another way?

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. The series covers asynchronous functions, TPL Dataflow, and the Task-based Asynchronous Pattern; Part II covers TPL Dataflow. Definition (quoting the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you synchronize data access in a parallel environment? How do you avoid concurrency issues? How do you get notified when data is available? How do you control how much data is waiting to be consumed? Etc.  Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the available data processing blocks. Blocks are categorized into three groups: Buffering Blocks, Executor Blocks, and Joining Blocks. Think of them as electronic circuitry components :) 1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to any number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and, once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data results in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: an executor block designed to transform each input, which is why it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs can be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. 
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
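In the meantime, here is a minimal sketch of my own (not from the Async CTP documentation) showing two of the blocks described above wired into a pipeline. It assumes the System.Threading.Tasks.Dataflow library is referenced; the exact API surface of the CTP preview may differ slightly from what eventually shipped.

    using System;
    using System.Threading.Tasks.Dataflow;

    class DataflowSketch
    {
        static void Main()
        {
            // TransformBlock<TInput, TOutput>: turns each int into a string.
            var toText = new TransformBlock<int, string>(
                n => string.Format("{0} squared is {1}", n, n * n));

            // ActionBlock<TInput>: consumes each transformed message.
            var print = new ActionBlock<string>(s => Console.WriteLine(s));

            // Link the blocks so completion propagates down the pipeline.
            toText.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

            // Post data into the head of the pipeline.
            for (int i = 1; i <= 5; i++)
            {
                toText.Post(i);
            }

            toText.Complete();        // signal that no more input is coming
            print.Completion.Wait();  // wait until every message has been processed
        }
    }

Messages flow through the TransformBlock in order and are then handed to the ActionBlock, which matches the behavior described for those two blocks above.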

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Selenium – Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspxSelenium uses WebDriver (or is it the same? I’m still learning how it is connected) to run Automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MStest is to create  a test method for each test (ie ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call  the same base method, passing in the browser and size you’d like. This approach was causing a lot of duplicate code, so I decided to use the data driven approach, common to Coded UI or Unit test methods. 1. Create a class with a test method. 2. Create a csv with two columns: BrowserType, BrowserSize 3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | *** 4. Add the csv to the Visual Studio Project. 5. Set the Copy to output directory to Copy always 6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] 7. Run the test in the test explorer Example:[CodedUITest] public class AllTasksTests : TasksTestBase { [TestMethod] [TestCategory("Tasks")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")] public void CreateTask() { this.PrepForDataDrivenTest(); base.CreateTaskTest("New Task"); } } protected void PrepForDataDrivenTest() { var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString()); var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString()); this.BrowserType = browserType; this.BrowserSize = browserSize; Trace.WriteLine("browser: " + browserType.ToString()); Trace.WriteLine("browser size: " + browserSize.ToString()); } /// <summary> /// Get the enum value from the string /// </summary> /// <param name="browserType">Chrome, Firefox, or IE</param> /// <returns>The browser type.</returns> private BrowserType ParseBrowserType(string browserType) { return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true); } /// <summary> /// Get the browser size enum value from the string /// </summary> /// <param name="browserSize">Small, Medium, Large</param> /// <returns>the browser size</returns> private BrowserSizeEnum ParseBrowserSize(string browserSize) { return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true); }/// <summary> /// Change the browser to the size based on the enum. 
/// </summary> /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param> private void ResizeBrowser(BrowserSizeEnum browserSize) { switch (browserSize) { case BrowserSizeEnum.Large: this.driver.Manage().Window.Maximize(); break; case BrowserSizeEnum.Medium: this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height); break; case BrowserSizeEnum.Small: this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height); break; default: break; } }/// <summary> /// Browser sizes for automation testing /// </summary> public enum BrowserSizeEnum { /// <summary> /// Large size, Maximized to the desktop /// </summary> Large, /// <summary> /// Similar to tablets /// </summary> Medium, /// <summary> /// Phone sizes... 610px and smaller /// </summary> Small } Hope it helps!
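For reference, the TestMatrix.csv described in steps 2 and 3 might look like the sketch below (my example, not from the original post); the header row has to match the column names read through Context.DataRow, and each data row becomes one iteration of the data-driven test:

    BrowserType,BrowserSize
    Chrome,Large
    Chrome,Small
    Firefox,Large
    Firefox,Small
    IE,Large
    IE,Small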

    Read the article

  • Is this spaghetti code already? [migrated]

    - by hephestos
    I post the following code writen all by hand. Why I have the feeling that it is a western spaghetti on its own. Second, could that be written better? <div id="form-board" class="notice" style="height: 200px; min-height: 109px; width: auto;display: none;"> <script type="text/javascript"> jQuery(document).ready(function(){ $(".form-button-slide").click(function(){ $( "#form-board" ).dialog(); return false; }); }); </script> <?php echo $this->Form->create('mysubmit'); echo $this->Form->input('inputs', array('type' => 'select', 'id' => 'inputs', 'options' => $inputs)); echo $this->Form->input('Fields', array('type' => 'select', 'id' => 'fields', 'empty' => '-- Pick a state first --')); echo $this->Form->input('inputs2', array('type' => 'select', 'id' => 'inputs2', 'options' => $inputs2)); echo $this->Form->input('Fields2', array('type' => 'select', 'id' => 'fields2', 'empty' => '-- Pick a state first --')); echo $this->Form->end("Submit"); ?> </div> <div style="width:100%"></div> <div class="form-button-slide" style="float:left;display:block;"> <?php echo $this->Html->link("Error Results", "#"); ?> </div> <script type="text/javascript"> jQuery(document).ready(function(){ $("#mysubmitIndexForm").submit(function() { // we want to store the values from the form input box, then send via ajax below jQuery.post("Staffs/view", { data1: $("#inputs").attr('value'), data2:$("#inputs2").attr('value'),data3:$("#fields").attr('value'), data4:$("#fields2").attr('value') } ); //Close the dialog $( "#form-board" ).dialog('close') return false; }); $("#inputs").change(function() { // we want to store the values from the form input box, then send via ajax below var input_id = $('#inputs').attr('value'); $.ajax({ type: "POST", //The controller who listens to our request url: "Inputs/getFieldsFromOneInput/"+input_id, data: "input_id="+ input_id, //+"&amp; lname="+ lname, success: function(data){//function on success with returned data $('form#mysubmit').hide(function(){}); data = $.parseJSON(data); var sel = $("#fields"); sel.empty(); for (var i=0; i<data.length; i++) { sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>'); } } }); return false; }); $("#inputs2").change(function() { // we want to store the values from the form input box, then send via ajax below var input_id = $('#inputs2').attr('value'); $.ajax({ type: "POST", //The controller who listens to our request url: "Inputs/getFieldsFromOneInput/"+input_id, data: "input_id="+ input_id, //+"&amp; lname="+ lname, success: function(data){//function on success with returned data $('form#mysubmit').hide(function(){}); data = $.parseJSON(data); var sel = $("#fields2"); sel.empty(); for (var i=0; i<data.length; i++) { sel.append('<option value="' + data[i].id + '">' + data[i].name + '</option>'); } } }); return false; }); }); </script>

    Read the article

  • How to convert html textfield/area data to server-side txt file? [closed]

    - by olijake
    How can I make a script that will convert the text/data in a html textfield/textarea and send it to the server, which then saves it as a .txt file for storage? NOTE: I am hosting a website(for testing purposes) using Apache 2.2 on a Windows 7 machine. I downloaded PHP version 5.4.7, but have not yet installed on my server yet (not sure if I will need it, but also not sure how to install it). 1st problem: Saving text to server Html page/section with title textfield, text textarea, and submit button. You would enter a title, the text/notes you need in the textfield, then press the submit button to have it store the text in the textarea, as a .txt file on the server called .txt. 2nd problem: Opening text from server Html with list of all txt files OR textfield for entering in title, then submit button to send the title of the requested .txt file to the server, which would then load it up on the page. Here is what I have so far: (let me know if there is something that I should change or if something just isn't correct in the index.html code I have right now.) <!DOCTYPE HTML> <html> <head> <title>Insert Title</title> <meta http-equiv="Content-Type" content="Text/HTML; charset=UTF-8"/> </head> <body> <form method="post" action="save.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data"> <input type="text" name="title"><br/> <textarea rows="20" cols="100" id="text" name="text"></textarea><br/> <input type="submit" name="submit" value="Submit Text to Server"> </form><br/> <hr style="width: 100%; height: 4px;"><br/> <form method="post" action="open.INSERT_FILETYPE" name="textfile" enctype="multipart/form-data"> <input type="text" name="title"><br/> <input type="submit" name="submit" value="Submit Txt File Request"> </form><br/> <div>Opened text file displays here or goes on another page</div> </body> </html> I plan on using a server side language/script, but ANYTHING that gets the job done is fine. I already tried looking into using some ASP/jScript/PHP, but have had some trouble implementing it into my server. (ie: getting the modules loaded and telling the server what file types to parse.) I know this may be an extremely easy fix, but then in that case, hopefully you wouldn't mind helping me out a little :). If it turns out that this is MUCH more complicated than I expect, then feel free to let me know that, so I don't waste me time running in circles. I appreciate any help/assistance that you can provide, Thanks, Jake EDIT: Wrong Apache version. In response to the comments/closing of this thread: My question: "How exactly do I install the PHP module on the apache server? and is this even possible? and is this even recommended?" ^ In case I wasn't clear enough already To Clarify: I understand the basics of PHP, I just have trouble with INSTALLING PHP on the apache server. (I have used PHP before, but never successfully on apache (so far...)) For my script I wrote something similar to this already (using fopen() and a few other commands): <?php fopen("notes.txt", "r"); file_put_contents("notes.txt",teststring1); ?> I have used javascript for this task before also (although I prefer using PHP and server-side languages): <script language="javascript"> function WriteToFile(){ var fso = new ActiveXObject("Scripting.FileSystemObject"); var s = fso.CreateTextFile("C:\\NewFile.txt", true); var text=document.getElementById("TextArea1").innerText; s.WriteLine(text); s.WriteLine('***********************'); s.Close(); } </script>

    Read the article

  • KVM Guest installed from console. But how to get to the guest's console?

    - by badbishop
    I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using command-line only (both hypervisor and guest). It goes without errors, and without a tangible result . I'd like to know how to do a text-only installation. So, here's what I've done: # virt-install \ --name=FE --ram=756 --vcpus=1 \ --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \ --nographics --os-type=linux \ --extra-args='console=tty0' -v \ --cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso Starting install... Creating domain... | 0 B 00:00 Connected to domain FE Escape character is ^] ÿ Now what? As I understand after googling for a couple of days, I should see the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, kindly suggesting that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this: Domain installation still in progress. You can reconnect to the console to complete the installation process. [root@v ~] # virsh console FEConnected to domain FE Escape character is ^] This shows that VM is running # virsh list Id Name State ---------------------------------- 8 FE running Qemu log: LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 char device redirected to /dev/pts/1 Output of /etc/libvirt/qemu/FE.xml # cat /etc/libvirt/qemu/FE.xml <domain type='kvm'> <name>FE</name> <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid> <memory>774144</memory> <currentMemory>774144</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='rhel6.0.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/FE.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:0a:65:8d'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <memballoon model='virtio'> <address type='pci' 
domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </memballoon> </devices> </domain> I'm obviously missing something that many others don't, but what is it? Thanx in advance!

    Read the article

  • FOSUserBundle override mapping to remove need for username

    - by musoNic80
    I want to remove the need for a username in the FOSUserBundle. My users will login using an email address only and I've added real name fields as part of the user entity. I realised that I needed to redo the entire mapping as described here. I think I've done it correctly but when I try to submit the registration form I get the error: "Only field names mapped by Doctrine can be validated for uniqueness." The strange thing is that I haven't tried to assert a unique constraint to anything in the user entity. Here is my full user entity file: <?php // src/MyApp/UserBundle/Entity/User.php namespace MyApp\UserBundle\Entity; use FOS\UserBundle\Model\User as BaseUser; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Validator\Constraints as Assert; /** * @ORM\Entity * @ORM\Table(name="depbook_user") */ class User extends BaseUser { /** * @ORM\Id * @ORM\Column(type="integer") * @ORM\GeneratedValue(strategy="AUTO") */ protected $id; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your first name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $firstName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your last name.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) */ protected $lastName; /** * @ORM\Column(type="string", length=255) * * @Assert\NotBlank(message="Please enter your email address.", groups={"Registration", "Profile"}) * @Assert\MaxLength(limit="255", message="The name is too long.", groups={"Registration", "Profile"}) * @Assert\Email(groups={"Registration"}) */ protected $email; /** * @ORM\Column(type="string", length=255, name="email_canonical", unique=true) */ protected $emailCanonical; /** * @ORM\Column(type="boolean") */ protected $enabled; /** * @ORM\Column(type="string") */ protected $salt; /** * @ORM\Column(type="string") */ protected $password; /** * @ORM\Column(type="datetime", nullable=true, name="last_login") */ protected $lastLogin; /** * @ORM\Column(type="boolean") */ protected $locked; /** * @ORM\Column(type="boolean") */ protected $expired; /** * @ORM\Column(type="datetime", nullable=true, name="expires_at") */ protected $expiresAt; /** * @ORM\Column(type="string", nullable=true, name="confirmation_token") */ protected $confirmationToken; /** * @ORM\Column(type="datetime", nullable=true, name="password_requested_at") */ protected $passwordRequestedAt; /** * @ORM\Column(type="array") */ protected $roles; /** * @ORM\Column(type="boolean", name="credentials_expired") */ protected $credentialsExpired; /** * @ORM\Column(type="datetime", nullable=true, name="credentials_expired_at") */ protected $credentialsExpiredAt; public function __construct() { parent::__construct(); // your own logic } /** * @return string */ public function getFirstName() { return $this->firstName; } /** * @return string */ public function getLastName() { return $this->lastName; } /** * Sets the first name. * * @param string $firstname * * @return User */ public function setFirstName($firstname) { $this->firstName = $firstname; return $this; } /** * Sets the last name. * * @param string $lastname * * @return User */ public function setLastName($lastname) { $this->lastName = $lastname; return $this; } } I've seen various suggestions about this but none of the suggestions seem to work for me. 
The FOSUserBundle docs are very sparse about what must be a very common request.

    Read the article

  • I am getting the error Wrong type argument to unary minus and Expected ';' before ':' token

    - by James B.
    I am getting the error Wrong type argument to unary minus and Expected ';' before ':' token The error occurs at the - (NSIndexPath *).... line I am really New at this, so if there is anymore info needed, please ask, if you need to see the entire app, please e-mail me @ james at sevenotwo dot com. the app isn't really complicated. it is based on the sample code on Apple's website for the iphonedatacorerecipes code. #pragma mark - #pragma mark Editing rows - (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSIndexPath *rowToSelect = indexPath; NSInteger section = indexPath.section; BOOL isEditing = self.editing; // If editing, don't allow notes to be selected // Not editing: Only allow notes to be selected if ((isEditing && section == NOTES_SECTION) || (!isEditing && section != NOTES_SECTION)) { [tableView deselectRowAtIndexPath:indexPath animated:YES]; rowToSelect = nil; } return rowToSelect; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { NSInteger section = indexPath.section; UIViewController *nextViewController = nil; /* What to do on selection depends on what section the row is in. For Type, Notes, and Instruments, create and push a new view controller of the type appropriate for the next screen. */ switch (section) { case TYPE_SECTION: nextViewController = [[TypeSelectionViewController alloc] initWithStyle:UITableViewStyleGrouped]; ((TypeSelectionViewController *)nextViewController).doctor = doctor; break; case NOTES_SECTION: nextViewController = [[NotesViewController alloc] initWithNibName:@"NotesView" bundle:nil]; ((NotesViewController *)nextViewController).doctor = doctor; break; case INSTRUMENTS_SECTION: nextViewController = [[InstrumentDetailViewController alloc] initWithStyle:UITableViewStyleGrouped]; ((InstrumentDetailViewController *)nextViewController).doctor = doctor; if (indexPath.row < [doctor.instruments count]) { Instrument *instrument = [instruments objectAtIndex:indexPath.row]; ((InstrumentDetailViewController *)nextViewController).instrument = instrument; } break; default: break; } // If we got a new view controller, push it . if (nextViewController) { [self.navigationController pushViewController:nextViewController animated:YES]; [nextViewController release]; } } - (UITableViewCellEditingStyle)tableView:(UITableView *)tableView editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCellEditingStyle style = UITableViewCellEditingStyleNone; // Only allow editing in the instruments section. // In the instruments section, the last row (row number equal to the count of instruments) is added automatically (see tableView:cellForRowAtIndexPath:) to provide an insertion cell, so configure that cell for insertion; the other cells are configured for deletion. if (indexPath.section == INSTRUMENTS_SECTION) { // If this is the last item, it's the insertion row. if (indexPath.row == [doctor.instruments count]) { style = UITableViewCellEditingStyleInsert; } else { style = UITableViewCellEditingStyleDelete; } } return style; } - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { // Only allow deletion, and only in the instruments section if ((editingStyle == UITableViewCellEditingStyleDelete) && (indexPath.section == INSTRUMENTS_SECTION)) { // Remove the corresponding instrument object from the doctor's instrument list and delete the appropriate table view cell. 
Instrument *instrument = [instruments objectAtIndex:indexPath.row]; [doctor removeInstrumentsObject:instrument]; [instruments removeObject:instrument]; NSManagedObjectContext *context = instrument.managedObjectContext; [context deleteObject:instrument]; [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationTop]; } }

    Read the article

  • Why can't I reference child entities with part of the parent entities composite key

    - by tigermain
    I am trying to reference some child entities with part of the parents composite key not all of it, why cant I? This happens when I use the following mapping instead of that which is commented. I get the following error Foreign key in table VolatileEventContent must have same number of columns as referenced primary key in table LocationSearchView <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="JeanieMaster.Domain.Entities" assembly="JeanieMaster.Domain"> <class name="LocationSearchView" table="LocationSearchView"> <composite-id> <key-property name="LocationId" type="Int32"></key-property> <key-property name="ContentProviderId" type="Int32"></key-property> <key-property name="CategoryId" type="Int32"></key-property> </composite-id> <property name="CompanyName" type="String" not-null="true" update="false" insert="false"/> <property name="Description" type="String" not-null="true" update="false" insert="false"/> <property name="CategoryId" type="Int32" not-null="true" update="false" insert="false"/> <property name="ContentProviderId" type="Int32" not-null="true" update="false" insert="false"/> <property name="LocationId" type="Int32" not-null="true" update="false" insert="false"/> <property name="Latitude" type="Double" update="false" insert="false" /> <property name="Longitude" type="Double" update="false" insert="false" /> <bag name="Events" table="VolatileEventContent" where="DeactivatedOn IS NULL" order-by="StartDate DESC" lazy="false" cascade="none"> <key> <column name="LocationId"></column> <column name="ContentProviderId"></column> <!--<column name="LocationId"></column> <column name="ContentProviderId"></column> <column name="CategoryId"></column>--> </key> <one-to-many class="Event" column="VolatileEventContentId"></one-to-many> </bag> </class> </hibernate-mapping> And VolatileEventContent mapping file <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="JeanieMaster.Domain.Entities" assembly="JeanieMaster.Domain"> <class name="Event" table="VolatileEventContent" select-before-update="false" optimistic-lock="none"> <composite-id> <key-property name="LocationId" type="Int32"></key-property> <key-property name="ContentProviderId" type="Int32"></key-property> </composite-id> <property name="Description" type="String" not-null="true" update="false" insert="false"/> <property name="StartDate" type="DateTime" not-null="true" update="false" insert="false" /> <property name="EndDate" type="DateTime" not-null="true" update="false" insert="false" /> <property name="CreatedOn" type="DateTime" not-null="true" update="false" insert="false" /> <property name="ModifiedOn" type="DateTime" not-null="false" update="false" insert="false" /> <many-to-one name="Location" class="Location" column="LocationId" /> <bag name="Artistes" table="EventArtiste" lazy="false" cascade="none"> <key name="VolatileEventContentId" /> <many-to-many class="Artiste" column="ArtisteId" ></many-to-many> </bag> </class> </hibernate-mapping>

    Read the article

  • How to determine if two generic type values are equal?

    - by comecme
    I'm trying to figure out how I can successfully determine if two generic type values are equal to each other. Based on Mark Byers' answer on this question I would think I can just use value.Equals() where value is a generic type. My actual problem is in a LinkedList implementation, but the problem can be shown with this simpler example. class GenericOjbect<T> { public T Value { get; private set; } public GenericOjbect(T value) { Value = value; } public bool Equals(T value) { return (Value.Equals(value)); } } Now I define an instance of GenericObject<StringBuilder> containing new StringBuilder("StackOverflow"). I would expect to get true if I call Equals(new StringBuilder("StackOverflow") on this GenericObject instance, but I get false. A sample program showing this: using System; using System.Text; class Program { static void Main() { var sb1 = new StringBuilder("StackOverflow"); var sb2 = new StringBuilder("StackOverflow"); Console.WriteLine("StringBuilder compare"); Console.WriteLine("1. == " + (sb1 == sb2)); Console.WriteLine("2. Object.Equals " + (Object.Equals(sb1, sb2))); Console.WriteLine("3. this.Equals " + (sb1.Equals(sb2))); var go1 = new GenericOjbect<StringBuilder>(sb1); var go2 = new GenericOjbect<StringBuilder>(sb2); Console.WriteLine("\nGenericObject compare"); Console.WriteLine("1. == " + (go1 == go2)); Console.WriteLine("2. Object.Equals " + (Object.Equals(go1, go2))); Console.WriteLine("3. this.Equals " + (go1.Equals(go2))); Console.WriteLine("4. Value.Equals " + (go1.Value.Equals(go2.Value))); } } For the three methods of comparing two StringBuilder objects, only the StringBuilder.Equals instance method (the third line) returns true. This is what I expected. But when comparing the GenericObject objects, its Equals() method (the third line) returns false. Interestingly enough, the fourth compare method does return true. I'd think the third and fourth comparison are actually doing the same thing. I would have expected true. Because in the Equals() method of the GenericObject class, both value and Value are of type T which in this case is a StringBuilder. Based on Mark Byers' answer in this question, I would've expected the Value.Equals() method to be using the StringBuilder's Equals() method. And as I've shown, the StringBuilder's Equal() method does return true. I've even tried public bool Equals(T value) { return EqualityComparer<T>.Default.Equals(Value, value); } but that also returns false. So, two questions here: Why doesn't the code return true? How could I implement the Equals method so it does return true?
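A sketch of my reading of what is happening (an illustration, not a verified answer): the third comparison never reaches the Equals(T) overload at all, because go2 is a GenericOjbect<StringBuilder> rather than a StringBuilder, so overload resolution falls back to the inherited Object.Equals(object), which compares references:

    // Sketch only (not part of the original question).
    Console.WriteLine(go1.Equals((object)go2));  // False: binds to Object.Equals(object)
    Console.WriteLine(go1.Equals(go2.Value));    // also False: inside Equals(T),
                                                 // Value.Equals(value) binds against
                                                 // object for an unconstrained T, and
                                                 // StringBuilder does not override
                                                 // Equals(object); its content comparison
                                                 // lives only in the non-virtual
                                                 // Equals(StringBuilder) overload.

That second point is also why the EqualityComparer<T>.Default attempt returns false: StringBuilder does not implement IEquatable<StringBuilder>, so the default comparer falls back to reference equality. Making comparison 3 return true would therefore seem to require both an Equals(object) override on GenericOjbect<T> and, for StringBuilder specifically, a comparer supplied from outside (for example one that compares ToString() output).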

    Read the article

  • Dual Boot issues with Windows 7 and Ubuntu

    - by Michael
    I'm finding myself in a rather unique situation. I've read through just about every resource I can find about this and while things have helped me understand some background, I haven't yet been able to find a solution. So I'm asking here. I originally had just a Windows 7 64-bit OS installation on my desktop. Learning that I couldn't do anything with Apache, PHP and MySql from within a 64-bit system, I did some research and found out that I could use Ubuntu. I've installed the latest version: 11.04. I created a CD to install Ubuntu from and the install went just fine. I installed it side-by-side with Windows 7. I can boot into Ubuntu just fine through the dual-boot option. When I reboot to load Windows though, the Grub2 list shows Windows 7 (loader) and when I select this option the Windows System Recovery loads instead of the actual OS. I haven't made it past there because I didn't know what to do. I just shut the computer down and rebooted into Ubuntu. I've been working for the last hour and a half to try to figure out how to boot into the Windows 7 OS and I haven't got a clue. While I'm somewhat proficient with Windows 7, I'm totally new to Ubuntu, so if you do know what needs to happen, please keep it simple enough that I'll be able to understand. Thanks for all your help in advance. Here's the results after using the Boot Info Script: Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive in partition #5 for cbh. => Windows is installed in the MBR of /dev/sdb => Grub 2 is installed in the MBR of /dev/mapper/pdc_bdadcfbdif and looks on the same drive in partition #5 for cbh. sda1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sda2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sda3: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy sdb1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /Boot/BCD sdb2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Boot files/dirs: sdb3: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /boot/BCD sdb4: _________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 11.04 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb6: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: pdc_bdadcfbdif1: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: /bootmgr /Boot/BCD pdc_bdadcfbdif2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe pdc_bdadcfbdif3: _________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy fuse: mount failed: Device or resource busy mount: unknown filesystem type '' =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sda1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/sda2 206,911 1,440,372,735 1,440,165,825 7 HPFS/NTFS /dev/sda3 1,440,372,736 1,464,856,575 24,483,840 7 HPFS/NTFS Drive: sdb ___________________ _____________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sdb1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/sdb2 206,911 1,342,554,688 1,342,347,778 7 HPFS/NTFS /dev/sdb3 1,930,344,448 1,953,521,663 23,177,216 7 HPFS/NTFS /dev/sdb4 1,342,556,158 1,930,344,447 587,788,290 5 Extended /dev/sdb5 1,342,556,160 1,896,806,399 554,250,240 83 Linux /dev/sdb6 1,896,808,448 1,930,344,447 33,536,000 82 Linux swap / Solaris Drive: pdc_bdadcfbdif ___________________ _____________________________________________________ Disk /dev/mapper/pdc_bdadcfbdif: 750.0 GB, 749999947776 bytes 255 heads, 63 sectors/track, 91182 cylinders, total 1464843648 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/mapper/pdc_bdadcfbdif1 * 2,048 206,847 204,800 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif2 
206,911 1,440,372,735 1,440,165,825 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif3 1,440,372,736 1,464,856,575 24,483,840 7 HPFS/NTFS /dev/mapper/pdc_bdadcfbdif3 ends after the last sector of /dev/mapper/pdc_bdadcfbdif blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/mapper/pdc_bdadcfbdif1 888E54CC8E54B482 ntfs SYSTEM /dev/mapper/pdc_bdadcfbdif2 C2766BF6766BEA1D ntfs OS /dev/mapper/pdc_bdadcfbdif: PTTYPE="dos" /dev/sda1 888E54CC8E54B482 ntfs SYSTEM /dev/sda2 C2766BF6766BEA1D ntfs OS /dev/sda3 BE6CA31D6CA2CF87 ntfs HP_RECOVERY /dev/sda promise_fasttrack_raid_member /dev/sdb1 20B65685B6565B7C ntfs SYSTEM /dev/sdb2 B4467A314679F508 ntfs HP /dev/sdb3 6E10B7A410B77227 ntfs FACTORY_IMAGE /dev/sdb4: PTTYPE="dos" /dev/sdb5 266f9801-cf4f-4acc-affa-2092be035f0c ext4 /dev/sdb6 1df35749-a887-45ff-a3de-edd52239847d swap /dev/sdb: PTTYPE="dos" error: /dev/mapper/pdc_bdadcfbdif3: No such file or directory error: /dev/sdc: No medium found error: /dev/sdd: No medium found error: /dev/sde: No medium found error: /dev/sdf: No medium found error: /dev/sdg: No medium found ============================ "mount | grep ^/dev output: =========================== Device Mount_Point Type Options /dev/sdb5 / ext4 (rw,errors=remount-ro,commit=0) =========================== sdb5/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 
266f9801-cf4f-4acc-affa-2092be035f0c linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc- affa-2092be035f0c ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic-pae } menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c echo 'Loading Linux 2.6.38-8-generic-pae ...' linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.38-8-generic-pae } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/sdb,msdos5)' search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(/dev/sdb,msdos1)' search --no-floppy --fs-uuid --set=root 20B65685B6565B7C chainloader +1 } menuentry "Windows Recovery Environment (loader) (on /dev/sdb3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(/dev/sdb,msdos3)' search --no-floppy --fs-uuid --set=root 6E10B7A410B77227 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb5/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb5 during installation UUID=266f9801-cf4f-4acc-affa-2092be035f0c / ext4 errors=remount-ro 0 1 # swap was on /dev/sdb6 during installation UUID=1df35749-a887-45ff-a3de-edd52239847d none swap sw 0 0 =================== sdb5: Location of files loaded by Grub: =================== 900.1GB: boot/grub/core.img 825.0GB: boot/grub/grub.cfg 688.7GB: boot/initrd.img-2.6.38-8-generic-pae 688.0GB: boot/vmlinuz-2.6.38-8-generic-pae 688.7GB: initrd.img 688.0GB: vmlinuz =========================== Unknown MBRs/Boot Sectors/etc ======================= Unknown BootLoader on pdc_bdadcfbdif3 =======Devices which don't seem to have a corresponding hard drive============== sdc sdd sde sdf sdg =============================== StdErr Messages: =============================== ERROR: dos: partition address past end of RAID device hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory ERROR: dos: partition address past end of RAID device

    Read the article

  • Cannot SSH after resetting firewall on VPS

    - by Thomas Buckley
    I'm having trouble trying to SSH to my Debian 5 VPS with blacknight. It was working fine until I did the following: Logged into 'Parallels Infrastructure Manager' - Container - Firewall - Set to 'Normal Firewall settings'. It told me there was an error with the IPTables and offered the option again with a checkbox to 'reset' firewall settings, I selected this. I can see that that the default rules are been applied ( anything from anyone on any port and allowing anything to happen). Whenever I attempt to SSH I get the following debug info: thomas@localmachine:~/.ssh$ ssh -v thomas@hostname OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to hostname [***********] port 22. debug1: Connection established. debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA ************************************* debug1: Host 'hostname' is known and matches the RSA host key. debug1: Found key in /home/thomas/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). I had my public/private RSA keys set up and working fine before I reset the firewall settings. I had also made the following changes to my /etc/ssh/sshd_config file on the VPS: PermitRootLogin no PasswordAuthentication no X11Forwarding no UsePAM no UseDNS no AllowUsers thomas Could it be something to do with the SSH server & client having different versions between my local machine and VPS? Any help appreciated. Output with ssh -vvv thomas@localcomputer:~/.ssh$ ssh -vvv thomas@**************** OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ************ [*************] port 22. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/thomas/.ssh/id_rsa" as a RSA1 public key debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/thomas/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-4096 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-4096 debug1: identity file /home/thomas/.ssh/id_rsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_dsa type -1 debug1: identity file /home/thomas/.ssh/id_dsa-cert type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa type -1 debug1: identity file /home/thomas/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5 debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "*****************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],ssh-rsa debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: 
ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],ssh-rsa,[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 127/256 debug2: bits set: 498/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA *********************************************************** debug3: load_hostkeys: loading entries for host "*********************" from file "/home/thomas/.ssh/known_hosts" debug3: load_hostkeys: found key type RSA in file /home/thomas/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '****************' is known and matches the RSA host key. 
debug1: Found key in /home/thomas/.ssh/known_hosts:1 debug2: bits set: 516/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/thomas/.ssh/id_rsa (0x7fa7028b6010) debug2: key: /home/thomas/.ssh/id_dsa ((nil)) debug2: key: /home/thomas/.ssh/id_ecdsa ((nil)) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/thomas/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /home/thomas/.ssh/id_dsa debug3: no such identity: /home/thomas/.ssh/id_dsa debug1: Trying private key: /home/thomas/.ssh/id_ecdsa debug3: no such identity: /home/thomas/.ssh/id_ecdsa debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). sshd_config # Package generated configuration file # See the sshd(8) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) C hallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords PasswordAuthentication no # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server UsePAM no UseDNS no AllowUsers thomas Thanks

    Read the article

  • This web part does not have a valid XSLT stylesheet: There is no XSL property available for presenting the data.

    - by Patrick Olurotimi Ige
    I have been thinking for a while about how I can reuse my code when building custom DataView web parts in SharePoint Designer 2010, so I decided to use XslLink, which is one of the properties available when you edit a SharePoint web part. I started by creating an XSL file I could reuse, but after adding the link to the file like so: <XslLink>sites/server/mycustomtemplate.xsl</XslLink> I get the error: "This web part does not have a valid XSLT stylesheet: There is no XSL property available for presenting the data." After some debugging I noticed that the relative directory path in the link to the XSL stylesheet was getting broken, so I changed it to the full URL http://mysite/sites/server/mycustomtemplate.xsl and it works. Enjoy!
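    For reference, here is the web part property in both forms, using the placeholder paths from the post itself (substitute your own site URL and file name):

        <!-- Relative path: produced "There is no XSL property available for presenting the data" -->
        <XslLink>sites/server/mycustomtemplate.xsl</XslLink>

        <!-- Full URL: resolves correctly -->
        <XslLink>http://mysite/sites/server/mycustomtemplate.xsl</XslLink>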

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post its data to a Web API endpoint via AJAX:

<form> Name: <input type="text" name="name" value="Rick" /> Value: <input type="text" name="value" value="12" /> Entered: <input type="text" name="entered" value="12/01/2011" /> <input type="button" id="btnSend" value="Send" /> </form> <script type="text/javascript"> $("#btnSend").click( function() { $.post("samples/PostMultipleSimpleValues?action=kazam", $("form").serialize(), function (result) { alert(result); }); }); </script>

Or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam", { name: "Rick", value: 1, entered: "12/01/2012" }, function (result) { alert(result); });

On the wire this generates a simple POST request with URL-encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: application/json Connection: keep-alive Content-Type: application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With: XMLHttpRequest Referer: http://localhost/AspNetWebApi/FormPostTest.html Content-Length: 41 Pragma: no-cache Cache-Control: no-cache
name=Rick&value=12&entered=12%2F10%2F2011

Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately, Web API can't handle this request out of the box. If I create a method like this:

[HttpPost] public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null) { return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action); }

you'll find that you get an HTTP 404 error and { "Message": "No HTTP resource was found that matches the request URI…"}. Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately, you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, one 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more importantly - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
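To make the two out-of-the-box alternatives mentioned above concrete, here is a minimal sketch - the DTO and method names are made up for illustration, and the two actions are shown side by side only for comparison (under the default route you would pick one of them). Both accept the url-encoded form post shown earlier without any custom binding:

using System;
using System.Collections.Specialized;
using System.Net.Http.Formatting;
using System.Web.Http;

// Hypothetical DTO whose property names match the posted form fields
public class PostedValues
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

public class SamplesController : ApiController
{
    // Option 1: model binding - Web API maps the url-encoded body onto the DTO,
    // while the simple 'action' parameter still comes from the query string.
    [HttpPost]
    public string PostViaModel(PostedValues data, string action = null)
    {
        return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                             data.Name, data.Value, data.Entered, action);
    }

    // Option 2: FormDataCollection - read the raw name/value pairs yourself.
    [HttpPost]
    public string PostViaFormData(FormDataCollection form, string action = null)
    {
        NameValueCollection values = form.ReadAsNameValueCollection();
        return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                             values["name"], values["value"], values["entered"], action);
    }
}

Neither of these gives you the plain simple-parameter signature the post is after, which is exactly where the custom parameter binding in the next section comes in.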
Creating a Custom Parameter Binder Luckily Web API is greatly extensible and there's a way to create a custom Parameter Binding to provide this functionality! Although this solution took me a long while to find and then only with the help of some folks Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:public class SimplePostVariableParameterBinding : HttpParameterBinding { private const string MultipleBodyParameters = "MultipleBodyParameters"; public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor) : base(descriptor) { } /// <summary> /// Check for simple binding parameters in POST data. Bind POST /// data as well as query string data /// </summary> public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider, HttpActionContext actionContext, CancellationToken cancellationToken) { // Body can only be read once, so read and cache it NameValueCollection col = TryReadBody(actionContext.Request); string stringValue = null; if (col != null) stringValue = col[Descriptor.ParameterName]; // try reading query string if we have no POST/PUT match if (stringValue == null) { var query = actionContext.Request.GetQueryNameValuePairs(); if (query != null) { var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower()); if (matches.Count() > 0) stringValue = matches.First().Value; } } object value = StringToType(stringValue); // Set the binding result here SetValue(actionContext, value); // now, we can return a completed task with no result TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>(); tcs.SetResult(default(AsyncVoid)); return tcs.Task; } private object StringToType(string stringValue) { object value = null; if (stringValue == null) value = null; else if (Descriptor.ParameterType == typeof(string)) value = stringValue; else if (Descriptor.ParameterType == typeof(int)) value = int.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int32)) value = Int32.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int64)) value = Int64.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(decimal)) value = decimal.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(double)) value = double.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(DateTime)) value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(bool)) { value = false; if (stringValue == "true" || stringValue == "on" || stringValue == "1") value = true; } else value = stringValue; return value; } /// <summary> /// Read and cache the request body /// </summary> /// <param name="request"></param> /// <returns></returns> private NameValueCollection TryReadBody(HttpRequestMessage request) { object result = null; // try to read out of cache first if (!request.Properties.TryGetValue(MultipleBodyParameters, out result)) { // parsing the string like firstname=Hongmei&lastname=Ge result = request.Content.ReadAsFormDataAsync().Result; 
request.Properties.Add(MultipleBodyParameters, result); } return result as NameValueCollection; } private struct AsyncVoid { } }   The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the Binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:GlobalConfiguration.Configuration.ParameterBindingRules .Insert(0, (HttpParameterDescriptor descriptor) => { var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods; // Only apply this binder on POST and PUT operations if (supportedMethods.Contains(HttpMethod.Post) || supportedMethods.Contains(HttpMethod.Put)) { var supportedTypes = new Type[] { typeof(string), typeof(int), typeof(decimal), typeof(double), typeof(bool), typeof(DateTime) }; if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0) return new SimplePostVariableParameterBinding(descriptor); } // let the default bindings do their work return null; });   The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook up code is in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings. Summary Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Micrsoft to provide a base example as I asked around so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible to make it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved  - in the last two months I've had two customers with projects that decided not to use Web API in AJAX heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier. 
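As a side note, the untyped JObject fallback mentioned above is only a couple of lines, but it assumes the client posts JSON (Content-Type: application/json) rather than url-encoded form data - a rough sketch, not code from the original post:

using Newtonsoft.Json.Linq;
using System.Web.Http;

public class UntypedSamplesController : ApiController
{
    // Works for JSON bodies such as {"name":"Rick","value":12,"entered":"2011-12-01"}
    [HttpPost]
    public string PostUntyped(JObject data)
    {
        // JObject gives indexer access to the posted properties without a DTO
        string name = (string)data["name"];
        int value = (int)data["value"];
        return string.Format("Name: {0}, Value: {1}", name, value);
    }
}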
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have been shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings.

Resources:
- SimplePostVariableParameterBinding Source on GitHub
- Global.asax hookup source
- Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
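As one concrete illustration of the caching compromise mentioned in the fix step, here is a minimal sketch (the key names, the generic helper and the five-minute expiry are arbitrary choices, and the fetch delegate stands in for whatever O/R mapper call loads the data; it requires a reference to the System.Runtime.Caching assembly):

using System;
using System.Runtime.Caching;

public static class QueryCache
{
    // Trades memory for fewer round trips: the cached result is re-queried
    // at most once per five minutes instead of once per request.
    public static T GetOrFetch<T>(string key, Func<T> fetch) where T : class
    {
        MemoryCache cache = MemoryCache.Default;
        var cached = cache.Get(key) as T;
        if (cached != null)
            return cached;

        T fresh = fetch();   // e.g. an O/R mapper fetch of lookup data
        cache.Set(key, fresh, DateTimeOffset.Now.AddMinutes(5));
        return fresh;
    }
}

// Hypothetical usage: var countries = QueryCache.GetOrFetch("Countries", () => FetchCountriesFromDb());

After introducing a cache like this, the article's advice still applies: re-run the full analysis to confirm the change actually moved the numbers.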
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
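Picking up the .ToList() pitfall from the list above, here is a minimal sketch of the difference; the IQueryable<Customer> parameter stands in for whatever queryable source your O/R mapper exposes (metaData.Customers in the article's own example), and the Customer class is a stub for illustration:

using System.Linq;

public class Customer
{
    public string Country { get; set; }
}

public static class CustomerQueries
{
    public static void CompareFilters(IQueryable<Customer> customers)
    {
        // In-memory filtering: ToList() materializes every customer first,
        // so the WHERE clause runs over IEnumerable<T> on the client.
        var inMemory = (from c in customers.ToList()
                        where c.Country == "Norway"
                        select c).ToList();

        // Database filtering: the query stays an IQueryable<T>, so the
        // predicate is translated into a SQL WHERE clause and only the
        // matching rows are fetched.
        var onServer = (from c in customers
                        where c.Country == "Norway"
                        select c).ToList();
    }
}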
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.

I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.


  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend them for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules
    All names should use Pascal casing.

    Project Namespaces
    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
    Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
    * Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation
    For web services: [CompanyName].Web.Services.[FunctionalName]
    Examples: ABC.Web.Services.OrderJellyBeans
    For the main BizTalk projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
    Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
    * Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

    Assemblies
    BizTalk assembly names should match the associated project namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to both the formal assembly name and the DLL name. The solution name should take the name of the main project within the solution, and therefore also the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

    Messaging Artifacts
    Artifact | Standard | Notes | Example
    Schema | <DescriptiveName>.xsd | .NET type name should match, without the file extension. .NET namespace will likely match the assembly name. | PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
    Property Schema | <DescriptiveName>.xsd | Should be named to reflect possible common usage across multiple schemas. | IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
    Map | <SourceSchema>2<DestinationSchema>.btm | Exceptions may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. | InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
    Orchestration | <DescriptiveName>.odx | | GetValuationReports.odx, SendMTEDecisionResponse.odx
    Send/Receive Pipeline | <DescriptiveName>.btp | | ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
    Receive Port | A plainly worded phrase that will clearly explain the function. | | FraudPreventionServices, LetterProcessing
    Receive Location | A plainly worded phrase that will clearly explain the function. | ? Do we want to include the transport type here ? | Arrears Web Service
    Send Port Group | A plainly worded phrase that will clearly explain the function. | | Customer Updates
    Send Port | A plainly worded phrase that will clearly explain the function. | | ABCProductUpdater, LogLendingPolicyOutput
    Parties | A meaningful name for a Trading Partner. | If dealing with multiple entities within a Trading Partner organization, the organization name could be used as a prefix. |
    Roles | A meaningful name for the role that a Trading Partner plays. | |

    Orchestration Workflow Shapes
    Shape | Standard | Notes | Example
    Scope | <DescriptionOfContainedWork> or <DescOfContainedWork><TxType> | Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. | HandleReportResponse
    Receive | Receive<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being received “into”. | ReceiveReportResponse
    Send | Send<MessageName> | Typically, MessageName will be the same as the name of the message variable that is being sent. | SendValuationDetailsRequest
    Expression | <DescriptionOfEffect> | Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception is the case where the expression interacts with an external .NET component to perform a function that overlaps with existing BizTalk functionality; use the closest BizTalk shape for this case. | CreatePrintXML
    Decide | <DescriptionOfDecision> | A description of what will be decided in the “if” branch. | Report Type? Perform MF Save?
    If-Branch | <DescriptionOfDecision> | A (potentially abbreviated) description of what is being decided. | Mortgage Valuation Yes
    Else-Branch | Else | Else-branch shapes should always be named “Else”. | Else
    Construct Message (Assign) | Create<Message> (for the Construct); <ExpressionDescription> (for the expression) | If a Construct shape contains a message assignment, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned. The message assignment shape contained within should be named to describe the expression that it contains. | CreateReportDataMV, which contains expression ExtractReportData
    Construct Message (Transform) | Create<Message> (for the Construct); <SourceSchema>2<DestSchema> (for the transform) | If a Construct shape contains a message transform, it should be prefixed with “Create” followed by an abbreviated name of the message being assigned. The message transform shape contained within should generally be named the same as the called map. | CreateReportDataMV, which contains transform ReportDataMV2ReportDataMV
    Construct Message (containing multiple shapes) | | If a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix. |
    Call/Start Orchestration | Call<OrchestrationName> or Start<OrchestrationName> | |
    Throw | Throw<ExceptionType> | The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. | ThrowRuleException, which references the “ruleException” variable
    Parallel | <DescriptionOfParallelWork> | Parallel shapes should be named by a description of what work will be done in parallel. |
    Delay | <DescriptionOfWhatWaitingFor> | Delay shapes should be named by a description of what is being waited for. | POAcknowledgeTimeout
    Listen | <DescriptionOfOutcomes> | Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. | POAckOrTimeout, FirstShippingBid
    Loop | <DescriptionOfLoop> | A (potentially abbreviated) description of what the loop is. | ForEachValuationReport, WhileErrorFlagTrue
    Role Link | See “Roles” in the messaging naming conventions above. | |
    Suspend | <ReasonDescription> | Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property, and should include what should be done by the administrator before resuming the orchestration. | ReEstablishCreditLink
    Terminate | <ReasonDescription> | Describe why the orchestration terminated. More detail can be passed to the error property. | TimeoutsExpired
    Call Rules | Call<PolicyName> | The policy name may need to be abbreviated. | CallLendingPolicy
    Compensate | Compensate or Compensate<TxName> | If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction; otherwise it should simply be Compensate. | CompensateTransferFunds

    Orchestration Types
    Type | Standard | Notes | Example
    Multi-Part Message Type | <LogicalDocumentType> | Multi-part types encapsulate multiple parts. The WSDL spec indicates “parts are a flexible mechanism for describing the logical abstract content of a message.” The name of the multi-part type should correspond to the “logical” document type, i.e. what the sum of the parts describes. | InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
    Multi-Part Message Part | <SchemaNameOfPart> | Should be named (most often) simply for the schema (or simple type) associated with the part. | InvoiceHeader
    Message | <SchemaName> or <MultiPartMessageTypeName> | Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name it for its use within the orchestration. | ReportDataMV, UpdatedReportDataMV
    Variable | <DescriptiveName> | | TargetFilePath, StringProcessor
    Port Type | <FunctionDescription>PortType | Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with “PortType”. If there will be more than one port for a port type, the port type should be named according to the abstract service supplied. The WSDL spec indicates port types are “a named set of abstract operations and the abstract messages involved” that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. | ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType, etc.)
    Port | <FunctionDescription>Port | Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with “Port”. | ReceiveReportResponsePort, CallEAEPort
    Correlation Type | <DescriptiveName> | Should be named based on the logical name of what is being used to correlate. | PurchaseOrderNumber
    Correlation Set | <DescriptiveName> | Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration. | PurchaseOrderNumber
    Orchestration Parameter | <DescriptiveName> | Should be named to match the caller’s names for the corresponding variables where appropriate. |


  • SQLAuthority News – Download Microsoft SQL Server 2012 RTM Now

    - by pinaldave
    SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization as well as quickly build solutions and extend data across on-premises and public cloud, backed by capabilities for mission critical confidence:
    - Deliver required uptime and data protection with AlwaysOn
    - Gain breakthrough & predictable performance with ColumnStore Index
    - Help enable security and compliance with new User-defined Roles and Default Schema for Groups
    - Enable rapid data discovery for deeper insights across the organization with ColumnStore Index
    - Ensure more credible, consistent data with SSIS improvements, a Master Data Services add-in for Excel, and new Data Quality Services
    - Optimize IT and developer productivity across server and cloud with Data-tier Application Component (DAC) parity with SQL Azure and SQL Server Data Tools for a unified dev experience across database, BI, and cloud functions
    Download SQL Server 2012 RTM
    Download Microsoft SQL Server 2012 Feature Pack
    Download SQL Server Data Tools
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology


  • What is a name for a job where you do system analysis, project management and data diagramming?

    - by David Archer
    In the last 4 months I've been able to manage a team and step away from the coding for a bit. I've been planning the system in full (both system analysis and project management, alongside action and data diagramming), writing the technical documentation and the code's architecture, keeping track of the other guys doing the actual coding, handling QA and bug reports, and dealing with clients. I had to take two days' training on node.js just to see if it would be suitable for a project we were considering. Is there a name for this job? Project Manager and Systems Architect don't quite seem to cover the same stuff, and IT Manager seems way off. I only want to know so that I can get some qualification towards it and try to move into this kind of work full-time.


  • What resources will help me understand the data model for QC 10.0 in order to write my SQL queries?

    - by srihari
    I am new to the HP Quality Center 10.0 software testing tool. As I understand it, in order to generate reports from QC and to troubleshoot scenarios, we need to write SQL queries against the QC back-end database; in my case it is a SQL Server database. I downloaded the database reference help file but I could not understand where to start: it just gives the table names and their information. For a starter like me, are there any online tutorials, helpful websites, hands-on exercises, or scenarios where I can better understand how to write queries for the QC data model? I am very confident about the SQL coding itself; what I want to know is how to query the QC database tables based on the scenarios that occur in the QC tool. Please suggest. Thanks, Srihari
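    For what it's worth, the mechanics of such a query are ordinary ADO.NET against the project database. A minimal sketch follows; the table and column names (DEFECTS, DEFECT_ID, DEFECT_STATUS, DETECTED_BY) and the connection string are hypothetical placeholders only, and the real names must be looked up in the database reference help file mentioned above:

    using System;
    using System.Data.SqlClient;

    // Illustrative only: the table/column names below are placeholders, not the
    // actual Quality Center schema; take the real names from the database reference.
    class QcQuerySample
    {
        static void Main()
        {
            string connectionString =
                "Server=myQcDbServer;Database=myQcProjectDb;Integrated Security=true;";

            using (var con = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT DEFECT_ID, DEFECT_STATUS, DETECTED_BY " +
                "FROM DEFECTS WHERE DEFECT_STATUS = @status", con))
            {
                cmd.Parameters.AddWithValue("@status", "Open");
                con.Open();

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: {1} ({2})",
                            reader["DEFECT_ID"],
                            reader["DEFECT_STATUS"],
                            reader["DETECTED_BY"]);
                    }
                }
            }
        }
    }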


  • How to implement jquery and Mootools together ?

    - by Avi Kumar Manku
    I am developing a website in which I am implementing two slider for images gallery using one with jQuery and one with moottools. But there is problem in implementing these because when I use both together the jQuery slider doesn't works where mootools slider works. jQuery slider works in case where I remove mootools. What should I do to implement both sliders together? Any suggestions will be helpful. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Tresmode | Footwear &amp; Accessories</title> <script type="text/javascript" src="js/jquery-1.5.min.js"></script> <script src="js/jquery.easing.1.3.js" type="text/javascript"></script> <script src="js/jquery.slideviewer.1.2.js" type="text/javascript"></script> <!-- Syntax hl --> <script src="js/jquery.syntax.min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript"> $(window).bind("load", function() { $("div#mygaltop").slideView({toolTip: true, ttOpacity: 0.5}); $("div#mygalone").slideView(); //if leaved blank performs the default kind of animation (easeInOutExpo, 750) $("div#mygaltwo").slideView({ easeFunc: "easeInOutBounce", easeTime: 2200, toolTip: true }); $("div#mygalthree").slideView({ easeFunc: "easeInOutSine", easeTime: 100, uiBefore: true, ttOpacity: 0.5, toolTip: true }); }); $(function(){ $.syntax({root: 'http://www.gcmingati.net/wordpress/wp-content/themes/giancarlo-mingati/js/jquery-syntax/'}); }); </script> <link href="css/style.css" rel="stylesheet" type="text/css" /> <link href="css/product.css" rel="stylesheet" type="text/css" /> <link href="css/scroll.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link href="css/ieonly.css" rel="stylesheet" type="text/css" /> <![endif]--> <script language="javascript" type="text/javascript" src="js/mootools-1.2-core.js"></script> <script language="javascript" type="text/javascript" src="js/mootools-1.2-more.js"></script> <script language="javascript" type="text/javascript" src="js/SlideItMoo.js"></script> <script language="javascript" type="text/javascript"> window.addEvent('domready', function(){ /* thumbnails example , links only */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs', elementScrolled: 'thumb_container', overallContainer: 'gallery_container'}); /* thumbnails example , div containers */ new SlideItMoo({itemsVisible:5, // the number of thumbnails that are visible currentElement: 0, // the current element. starts from 0. If you want to start the display with a specific thumbnail, change this thumbsContainer: 'thumbs2', elementScrolled: 'thumb_container2', overallContainer: 'gallery_container2'}); /* banner rotator example */ new SlideItMoo({itemsVisible:1, // the number of thumbnails that are visible showControls:0, // show the next-previous buttons autoSlide:2500, // insert interval in milliseconds currentElement: 0, // the current element. starts from 0. 
If you want to start the display with a specific thumbnail, change this transition: Fx.Transitions.Bounce.easeOut, thumbsContainer: 'banners', elementScrolled: 'banner_container', overallContainer: 'banners_container'}); }); </script> </head> <body> <div id="landing"> <!-- landing page menu --> <div id="landing_menu"> <ul> <li><a class="active" href="#">SPECIALS</a></li> <li><a href="#">SHOP MEN'S</a></li> <li class="none"><a class="none" href="#">SHOP WOMEN'S</a></li> </ul> </div> <!-- landing page menu --> <!-- loading container menu --> <div id="container_part"> <div id="big_image_slider"> <!-- <img src="images/briteloves.png" alt="Britelove" /> --> <div id="mygaltop" class="svw"> <ul> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/briteloves.png" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/1.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/2.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/3.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/4.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/5.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/6.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/7.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/8.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/9.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/10.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/11.jpg" /></li> <li><img alt="Tresmode | Footwear &amp; Accessories" src="images/12.jpg" /></li> </ul> </div> </div> <div class="new_style_banner"><img src="images/new_styles.png" alt="new style" /></div> <div class="new_style_banner"><img src="images/ford-super-models.png" alt="ford super models" /></div> </div> <!--- loading container menu --> <!-- footer scrool ---> <div id="footer_scroll"> <!--thumbnails slideshow begin--> <div id="gallery_container"> <div id="thumb_container"> <div id="thumbs"> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/3.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/4.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/5.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/6.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/1.jpg"/></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/2.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/7.jpg"/></a> <a href="gallery/full/DC080302018.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/8.jpg" /></a> <a href="gallery/full/DC080302028.jpg" rel="lightbox[galerie]" target="_blank"><img src="gallery/thumb/9.jpg" /></a> <a href="gallery/full/DC080302030.jpg" rel="lightbox[galerie]" target="_blank"><img 
src="gallery/thumb/10.jpg"/></a> </div> </div> </div> <!--thumbnails slideshow end--> </div> <!-- foooter scrooll --> </div> <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> var pageTracker = _gat._getTracker("UA-2064812-2"); pageTracker._initData(); pageTracker._trackPageview(); </script> </body> </html>


  • Securing data inside Azure SQL? Any good libraries or DIY?

    - by Sid
    Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that needs to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like key rotation (we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext) and metadata mapping of which tables and which columns are encrypted. This is simple while there are only a few, but it quickly gets out of hand ... So are there any libraries out there that do this well? Any other resources or design patterns I can be pointed to?
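    For reference, a minimal sketch of the kind of AesCryptoServiceProvider round-trip described above. The key handling is deliberately simplistic (the caller supplies a raw key and the IV is simply prepended to the ciphertext); where keys live, how they rotate, and which columns they cover is exactly the part the question is asking about:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Minimal AES round-trip of the kind described above. Key storage, rotation
    // and per-column metadata are out of scope of this sketch.
    static class AesSample
    {
        public static byte[] Encrypt(string plainText, byte[] key)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = key;
                aes.GenerateIV();

                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
                    byte[] cipher = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);

                    // Prepend the IV so Decrypt can recover it later.
                    byte[] result = new byte[aes.IV.Length + cipher.Length];
                    Buffer.BlockCopy(aes.IV, 0, result, 0, aes.IV.Length);
                    Buffer.BlockCopy(cipher, 0, result, aes.IV.Length, cipher.Length);
                    return result;
                }
            }
        }

        public static string Decrypt(byte[] cipherWithIv, byte[] key)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = key;

                byte[] iv = new byte[aes.BlockSize / 8];
                Buffer.BlockCopy(cipherWithIv, 0, iv, 0, iv.Length);
                aes.IV = iv;

                using (var decryptor = aes.CreateDecryptor())
                {
                    byte[] plain = decryptor.TransformFinalBlock(
                        cipherWithIv, iv.Length, cipherWithIv.Length - iv.Length);
                    return Encoding.UTF8.GetString(plain);
                }
            }
        }
    }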


  • What's a good data structure solution for a scene manager in XNA?

    - by tunnuz
    Hello, I'm playing with XNA for a game project of mine. I had previous exposure to OpenGL and worked a bit with Ogre, so I'm trying to get the same concepts working in XNA. Specifically, I'm trying to add a scene manager to XNA to handle hierarchical transforms, frustum (maybe even occlusion) culling and transparency object sorting. My plan was to build a tree scene manager to handle hierarchical transforms and lighting, and then use an octree for frustum culling and object sorting. The problem is how to do geometry sorting to support transparencies correctly. I know that sorting is very expensive if done on a per-polygon basis, so expensive that it is not even handled by Ogre; but still, images from Ogre look right. Any ideas on how to do it, and which data structures to use and what their capabilities are? I know people are using: Octrees; Kd-trees (someone on the GameDev forum said that these are far better than Octrees); BSPs (which should handle per-polygon ordering but are very expensive); BVHs (but just for frustum and occlusion culling). Thank you Tunnuz
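    As an illustration of the hierarchical-transform part of such a scene manager, here is a minimal scene-node sketch in C#; the Matrix type is the standard XNA framework one, everything else is a hypothetical name, and culling/sorting structures (octree, kd-tree, etc.) would sit on top of nodes like this:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    // Minimal scene-graph node: the world transform is the local transform combined
    // with the parent's, recursively. This sketch covers only hierarchical transforms;
    // a culling structure would index these nodes separately.
    public class SceneNode
    {
        private readonly List<SceneNode> children = new List<SceneNode>();

        public SceneNode()
        {
            LocalTransform = Matrix.Identity;
        }

        public SceneNode Parent { get; private set; }
        public Matrix LocalTransform { get; set; }

        public Matrix WorldTransform
        {
            get
            {
                return Parent == null
                    ? LocalTransform
                    : LocalTransform * Parent.WorldTransform;
            }
        }

        public void AddChild(SceneNode child)
        {
            child.Parent = this;
            children.Add(child);
        }

        public IEnumerable<SceneNode> Children
        {
            get { return children; }
        }
    }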

