Search Results

Search found 3181 results on 128 pages for 'integrated authentication'.

Page 119 of 128

  • How To: Using spatial data with Entity Framework and Connector/Net

    - by GABMARTINEZ
One of the new features introduced in Entity Framework 5.0 is the incorporation of some new types of data within an Entity Data Model: the spatial data types. These types allow us to perform operations on coordinate values in an easier way. There's no need to add stored routines or functions for every operation among these geometry types; the user now has the alternative to put this logic in the application or keep it in the database. The new Connector/Net 6.7.4 release also incorporates this feature, so our users can start exploring it and provide us feedback or comments about this new functionality. In this tutorial we'll create a Code First entity model with a geometry column and show some common operations when using geometry types inside an application.

Requirements:
- Connector/Net 6.7.4
- Entity Framework 5.0
- .NET Framework 4.5
- Basic understanding of Entity Framework and the C# language
- An installed and running instance of MySQL Server 5.5.x or 5.6.10
- Visual Studio 2012

Step One: Create a new Console Application
Inside Visual Studio select the File->New Project menu option and choose the Console Application template. Also make sure .NET 4.5 is selected so the new features for EF 5.0 will work with the application.

Step Two: Add the Entity Framework package
There is more than one way to add the Entity Framework package: the Package Manager Console or the Manage NuGet Packages dialog. To open the Package Manager Console, go to Tools -> Library Package Manager -> Package Manager Console and type:

Install-Package EntityFramework

This adds a reference to the latest non-alpha release of Entity Framework to the project.

Step Three: Adding the entity class and DbContext
We'll add a simple class that represents a table entity for saving some places and their locations, using a DbGeometry column that will be mapped to a Geometry type in MySQL. After that, some operations can be performed using this data.

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Spatial;

public class MyPlace
{
    [Key]
    public int Id { get; set; }
    public string name { get; set; }
    public DbGeometry location { get; set; }
}

public class JourneyDb : DbContext
{
    public DbSet<MyPlace> MyPlaces { get; set; }
}

Also make sure to add the connection string to the App.config file, as in this example:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <connectionStrings>
    <add name="JourneyDb" connectionString="server=localhost;userid=root;pwd=;database=journeydb" providerName="MySql.Data.MySqlClient"/>
  </connectionStrings>
  <entityFramework>
  </entityFramework>
</configuration>

Note also that the <entityFramework> section is empty.

Step Four: Adding some new records
In the Program.cs file, add the following code to the Main method so the database gets created and some new data is added to the new table. The code inserts a few records with specific locations; after they are added, a distance function is used to find out how far each location is from the Queens Village Station in New York.

static void Main(string[] args)
{
    using (JourneyDb cxt = new JourneyDb())
    {
        cxt.Database.Delete();
        cxt.Database.Create();

        cxt.MyPlaces.Add(new MyPlace()
        {
            name = "JFK INTERNATIONAL AIRPORT OF NEW YORK",
            location = DbGeometry.FromText("POINT(40.644047 -73.782291)"),
        });
        cxt.MyPlaces.Add(new MyPlace()
        {
            name = "ALLEY POND PARK",
            location = DbGeometry.FromText("POINT(40.745696 -73.742638)"),
        });
        cxt.MyPlaces.Add(new MyPlace()
        {
            name = "CUNNINGHAM PARK",
            location = DbGeometry.FromText("POINT(40.735031 -73.768387)"),
        });
        cxt.MyPlaces.Add(new MyPlace()
        {
            name = "QUEENS VILLAGE STATION",
            location = DbGeometry.FromText("POINT(40.717957 -73.736501)"),
        });
        cxt.SaveChanges();

        var points = (from p in cxt.MyPlaces
                      select new { p.name, p.location });
        foreach (var item in points)
        {
            Console.WriteLine("Location " + item.name + " has a distance in Km from Queens Village Station " + DbGeometry.FromText("POINT(40.717957 -73.736501)").Distance(item.location) * 100);
        }
        Console.ReadKey();
    }
}

Output:
Location JFK INTERNATIONAL AIRPORT OF NEW YORK has a distance from Queens Village Station 8.69448802402959 Km.
Location ALLEY POND PARK has a distance from Queens Village Station 2.84097675104912 Km.
Location CUNNINGHAM PARK has a distance from Queens Village Station 3.61695793727275 Km.
Location QUEENS VILLAGE STATION has a distance from Queens Village Station 0 Km.

Conclusion:
Adding spatial data to a table is easier than before with Entity Framework 5.0. This new Entity Framework feature that handles spatial data columns within the data layer has a lot of integrated functions and methods to ease these tasks.

Notes:
This version of Connector/Net has just been released as GA, so it is pretty much stable enough to be used in a production environment. Please send us your comments or questions using this blog or at the Forums, where we keep answering any questions you have about Connector/Net and MySQL Server. A copy of this sample project can be downloaded here. This application does not include any libraries, so you will have to add them before running it. Happy MySQL/.NET coding!
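As a quick extra, the same Distance method also works inside a LINQ-to-Entities filter. The following is a minimal sketch, assuming the JourneyDb context above, that the provider translates Distance to SQL, and the rough degrees-to-kilometres factor of 100 used in the sample output; the 3 km radius is an arbitrary choice for illustration. Note that in EF5 DbGeometry lives in System.Data.Spatial (it moved to System.Data.Entity.Spatial in EF6).

using System;
using System.Data.Spatial;
using System.Linq;

static class NearbyPlaces
{
    // Lists places within roughly 3 km of Queens Village Station, using the
    // same degrees-to-km approximation (* 100) as the walkthrough above.
    public static void ListNearby()
    {
        using (var cxt = new JourneyDb())
        {
            DbGeometry station = DbGeometry.FromText("POINT(40.717957 -73.736501)");
            var nearby = cxt.MyPlaces
                            .Where(p => p.location.Distance(station) * 100 < 3.0)
                            .Select(p => p.name);
            foreach (var name in nearby)
                Console.WriteLine(name);
        }
    }
}

With the sample data above, this would print ALLEY POND PARK and QUEENS VILLAGE STATION, the only two rows inside the radius.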

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
# connection and query stuff
$ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
$Query = "EXEC sp_who2"
$Connection = New-Object System.Data.SqlClient.SqlConnection
$Table = New-Object "System.Data.DataTable"
$Connection.ConnectionString = $ConnectionStr
try {
    $Connection.Open()
    $Command = $Connection.CreateCommand()
    $Command.CommandText = $Query
    $result = $Command.ExecuteReader()
    $Table.Load($result)
}
catch {
    # Show error
    $error[0] | Format-List -Force
}
$Title = "Data access processes (" + $Table.Rows.Count + ")"
$Table | Out-GridView -Title $Title
$Connection.Close()

So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, and saves having to define the connection object and query specifications.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
$Processes = $SMOServer.EnumProcesses()
$Title = "SMO processes (" + $Processes.Rows.Count + ")"
$Processes | Out-GridView -Title $Title

Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly and also place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
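In outline, the head-to-head timing might look like this. This is a minimal sketch that simply wraps the two snippets above; it assumes the SMO assemblies are already loaded (they are if you run it from sqlps) and that 'SERVERNAME' is replaced with a real instance name.

$Server = 'SERVERNAME'

# Time the query-execution method (EXEC sp_who2 over a SqlConnection)
$queryTime = Measure-Command {
    $Connection = New-Object System.Data.SqlClient.SqlConnection
    $Connection.ConnectionString = "Server=$Server;Database=Master;Integrated Security=True"
    $Connection.Open()
    $Command = $Connection.CreateCommand()
    $Command.CommandText = 'EXEC sp_who2'
    $Table = New-Object System.Data.DataTable
    $Table.Load($Command.ExecuteReader())
    $Connection.Close()
}

# Time the SMO EnumProcesses method
$smoTime = Measure-Command {
    $SMOServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
    $Processes = $SMOServer.EnumProcesses()
}

# Cross-refer the TotalMilliseconds values to compare the two methods
"sp_who2:       {0:N0} ms" -f $queryTime.TotalMilliseconds
"EnumProcesses: {0:N0} ms" -f $smoTime.TotalMilliseconds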
As the sketch above shows, all you have to do is wrap your piece of code in Measure-Command { <your code here> } and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ:

sp_who2       | EnumProcesses      | Description
------------- | ------------------ | -----------
-             | Urn                | What looks like an XML or JSON representation of the server name and the process ID
SPID          | Spid               | The process ID
Status        | Status             | The status of the process
Login         | Login              | The login name of the user executing the command
HostName      | Host               | The name of the computer where the process originated
BlkBy         | BlockingSpid       | The SPID of a process that is blocking this one
DBName        | Database           | The database that this process is connected to
Command       | Command            | The type of command that is executing
CPUTime       | Cpu                | The CPU activity related to this process
DiskIO        | -                  | The disk IO activity related to this process
LastBatch     | -                  | The time the last batch was executed from this process
ProgramName   | Program            | The application that is facilitating the process connection to the SQL Server
SPID1         | -                  | In my experience this is always the same value as SPID
REQUESTID     | -                  | In my experience this is always 0
-             | Name               | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
-             | MemUsage           | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, KB, MB…)
-             | IsSystem           | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
-             | ExecutionContextID | In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2

Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault-diagnosis methods too. So which is better?
On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that the margin really isn’t large enough to matter.

    Read the article

  • Visual Studio 2010 Productivity Tips and Tricks – Part 1: Extensions

    - by ToStringTheory
I don’t know about you, but when it comes to development, I prefer my environment to be as free of clutter as possible.  It may surprise you to know that I have tried ReSharper and did not like it, for the reason that I stated above: in my opinion, it had too much clutter.  Don’t get me wrong, there were a couple of features that I did like about it (inversion of if blocks, code feedback), but for the most part I actually felt that it was slowing me down.

Introduction
Another large factor, besides intrusiveness/speed, in my choice to dislike ReSharper would probably be that I have become comfortable with my current setup and extensions.  I believe I have a good collection, and am quite happy with what I can accomplish in a short amount of time.  I figured that I would share some of my tips/findings regarding Visual Studio productivity here, and see what you had to say.

The first section of things that I would like to cover are Visual Studio extensions.  In case you have been living under a rock for the past several years, extensions are available under the Tools menu in Visual Studio.  The Extension Manager enables integrated access to the Microsoft Visual Studio Gallery online, with access to a few thousand different extensions.  I have tried many extensions, but for reasons of lacking reliability, usability, or features, have uninstalled almost all of them.  However, I have come across several that I find I can not do without anymore:
- NuGet Package Manager (Microsoft)
- Perspectives (Adam Driscoll)
- Productivity Power Tools (Microsoft)
- Web Essentials (Mads Kristensen)

Extensions

NuGet Package Manager
To be honest, I debated whether or not to put this in here.  Most people seem to have it; however, there was a time when I didn’t, and I was always confused when blogs/posts would say to right-click and “Add Package Reference…”, which with one of the latest updates is now “Manage NuGet Packages”.  So, if you haven’t downloaded the NuGet Package Manager yet, or don’t know what it is, I would highly suggest downloading it now!

Features
Simply put, the NuGet Package Manager gives you a GUI and command line to access different libraries that have been uploaded to NuGet. Some of its features include:
- Ability to search NuGet for packages via the GUI, with information in the detail bar on the right.
- Quick access to see what packages are in a solution, and what packages have updates available, with easy 1-click updating.
- If you download a package that requires references to other NuGet packages, they will be downloaded and referenced automatically.

Productivity Tip
If you use any type of source control in Visual Studio as well as using NuGet packages, be sure to right-click on the solution and click "Enable NuGet Package Restore". What this does is add a NuGet package to the solution so that it will be checked in alongside your solution, as well as automatically grab packages from NuGet on build if needed. This is an extremely simple system for managing your package references, instead of having to manually go into TFS and add the Packages folder.

Perspectives
I can't stand developing with just one monitor, especially when it comes to debugging. The great thing about Visual Studio 2010 is that all of the panels and windows are floatable, and can dock to other screens. The only bad thing is, I don't use the same toolset for everything that I am doing. By this I mean that I don't use the same windows for debugging a web application as I do for coding a WPF application.
The only thing is, Visual Studio doesn't save the screen positions for all of the undocked windows. So I got curious one day and decided to check whether there was an extension to help out. This is where I found Perspectives.

Features
- Perspectives gives you the ability to configure window positions across any of your monitors, and then to save the positions in a profile.
- Perspectives offers a panel to manage different presets/favorites, and a toolbar to add to the toolbars at the top of Visual Studio.
- Ability to 'Favorite' a profile to add it to the Perspectives toolbar.

Productivity Tip
Take the time to set up profiles for each of your scenarios - debugging web/winforms/xaml, coding, maintenance, etc. Try to remember to use the profiles for a few days, and at the end of a week you may find that your productivity was never better.

Productivity Power Tools
Ah, the Productivity Power Tools... Quite possibly one of my most used extensions, if not my most used. The tool pack gives you a variety of enhancements, ranging from key shortcuts and interface tweaks to completely new features for Visual Studio 2010.

Features
I don't want to bore you with all of the features here, so here are my favorites:
- Quick Find - Unobtrusive search box in the upper-right corner of the code window. Great for searching in general, especially in a file.
- Solution Navigator - The 'Solution Explorer' on steroids. Easy to search for files, see defined members/properties/methods in files, and my favorite feature is the 'set as root' option.
- Updated 'Add Reference...' Dialog - This is probably my favorite enhancement, period... The 'Add Reference...' dialog redone in a manner that resembles the Extension/Package managers. I especially love the ability to search through all of the references.
- "Ctrl - Click" for Definition - I am still getting used to this, as I usually try to use my keyboard for everything, but I love the ability to hold Ctrl and turn properties/methods/variables into hyperlinks that you click on to see their definitions. Great for travelling down a rabbit hole in an application to research problems.
While there are other commands/utilities, I find these to be the ones that I lean on the most for their usefulness.

Web Essentials
If you do any type of web development in ASP.NET, ASP.NET MVC, or even plain HTML, I highly suggest grabbing Web Essentials right NOW! This extension alone is great for productivity in web development, and greatly decreases my development time on new features.

Features
Some of its best features include:
- CSS Previews - I say 'previews' because of the multiple kinds of previews you get in CSS: font-family, color, and background/background-image previews. This is great for tweaking the UI slightly in different ways and seeing how the changes look in the CSS window at a glance.
- Live Preview - One word: awesome! This goes well with my multi-monitor setup. I put the site on one monitor in a Live Preview panel, and then as I make changes to CSS/cshtml/aspx/html, the preview window updates with each save/build automatically. For CSS, you can even turn on live update, so as you are tweaking CSS the style changes in real time. Great for tweaking colors or font sizes.
- Outlining - Small, but I like to be able to collapse regions/declarations that are in the way of new work, or are just distracting.
- Commenting Shortcuts - I don't know why it wasn't included by default, but it is nice to have the key shortcuts for commenting working in the CSS editor as well.
Productivity Tip
When working on a site, hit Ctrl-Alt-Enter to launch the Live Preview window. Dock it to another monitor. When you make changes to the document/CSS, just save and glance at the other monitor. No need to Alt-Tab back and forth while editing.

Conclusion
These extensions are only the most useful and least intrusive - the ones that I use every day. The great thing about Visual Studio 2010 is the extensibility it offers developers. Have an extension that you use that isn't intrusive, but isn't listed here? Please feel free to comment. I love trying new things, and am always looking for new additions to my toolset of the most useful. Finally, please keep an eye out for Part 2, on key shortcuts in Visual Studio. Also, if you are visiting my site (http://tostringtheory.com || http://geekswithblogs.net/tostringtheory) from an actual browser and not a feed, please let me know what you think of the new styling!

    Read the article

  • Kubuntu 12.04 - Touchpad and keyboard stopped working at random

    - by StepTNT
As in the title, I've got this problem with my Kubuntu 12.04. At first I thought the whole system had hung, but it happened again 5 minutes ago and, while the keyboard and the touchpad stopped working, the music was still playing, so I guess it's just an "input" problem, because the system was still working! Any solution? Is there some data that you need to know about my setup? EDIT: Added my lshw output:

description: Notebook product: N53SV () vendor: ASUSTeK Computer Inc. version: 1.0 serial: B2N0AS17695408A width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: boot=normal chassis=notebook family=N uuid=8083F2DA-A43E-E081-3F3F-BCAEC55F8AA1
*-core description: Motherboard product: N53SV vendor: ASUSTeK Computer Inc. physical id: 0 version: 1.0 serial: BSN12345678901234567 slot: MIDDLE
*-firmware description: BIOS vendor: American Megatrends Inc. physical id: 0 version: N53SV.214 date: 08/10/2011 size: 64KiB capacity: 2496KiB capabilities: pci upgrade shadowing cdboot bootselect edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer acpi usb smartbattery biosbootspecification
*-cpu description: CPU product: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz serial: To Be Filled By O.E.M. slot: CPU 1 size: 800MHz capacity: 4GHz width: 64 bits clock: 100MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm ida arat epb xsaveopt pln pts tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=4 enabledcores=1 threads=2
*-cache description: L1 cache physical id: 5 slot: L1-Cache size: 32KiB capacity: 32KiB capabilities: internal write-back instruction
*-memory description: System Memory physical id: 40 slot: System board or motherboard size: 10GiB
*-bank:0 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: 99U5428-040.A00LF vendor: Kingston physical id: 0 serial: 103C28C3 slot: ChannelA-DIMM0 size: 4GiB width: 64 bits clock: 1333MHz (0.8ns)
*-bank:1 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 1 serial: 58383D1F slot: ChannelA-DIMM1 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
*-bank:2 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 2 serial: 58183D19 slot: ChannelB-DIMM0 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
*-bank:3 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: HMT325S6BFR8C-H9 vendor: Hynix/Hyundai physical id: 3 serial: 58183C8F slot: ChannelB-DIMM1 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns)
*-pci description: Host bridge product: 2nd Generation Core Processor Family DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 09 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0
*-pci:0 description: PCI bridge product: Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port vendor: Intel Corporation physical id: 1 bus info: pci@0000:00:01.0 version: 09 width: 32 bits clock: 33MHz capabilities: pci pm msi pciexpress normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:d000(size=4096) memory:db000000-dc0fffff ioport:c0000000(size=301989888)
*-generic UNCLAIMED description: Unassigned class product: Illegal Vendor ID vendor: Illegal Vendor ID physical id: 0 bus info: pci@0000:01:00.0 version: ff width: 32 bits clock: 66MHz capabilities: bus_master vga_palette cap_list configuration: latency=255 maxlatency=255 mingnt=255 resources: memory:db000000-dbffffff memory:c0000000-cfffffff memory:d0000000-d1ffffff ioport:d000(size=128) memory:dc000000-dc07ffff
*-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:47 memory:dc400000-dc7fffff memory:b0000000-bfffffff ioport:e000(size=64)
*-communication description: Communication controller product: 6 Series/C200 Series Chipset Family MEI Controller #1 vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: driver=mei latency=0 resources: irq:48 memory:df00b000-df00b00f
*-usb:0 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:16 memory:df008000-df0083ff
*-multimedia description: Audio device product: 6 Series/C200 Series Chipset Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:49 memory:df000000-df003fff
*-pci:1 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:c000(size=4096) memory:de600000-deffffff ioport:d4200000(size=10485760)
*-pci:2 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:b000(size=4096) memory:ddc00000-de5fffff ioport:d3700000(size=10485760)
*-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: 48:5d:60:f2:2c:fd width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-24-generic firmware=N/A ip=192.168.1.6 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:ddc00000-ddc0ffff
*-pci:3 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 4 vendor: Intel Corporation physical id: 1c.3 bus info: pci@0000:00:1c.3 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:43 ioport:a000(size=4096) memory:dd200000-ddbfffff ioport:d2c00000(size=10485760)
*-usb description: USB controller product: FL1000G USB 3.0 Host Controller vendor: Fresco Logic physical id: 0 bus info: pci@0000:04:00.0 version: 04 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress xhci bus_master cap_list configuration: driver=xhci_hcd latency=0 resources: irq:19 memory:dd200000-dd20ffff
*-pci:4 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 6 vendor: Intel Corporation physical id: 1c.5 bus info: pci@0000:00:1c.5 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:44 ioport:9000(size=4096) memory:dc800000-dd1fffff ioport:d2100000(size=10485760)
*-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 06 serial: bc:ae:c5:5f:8a:a1 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8168e-2.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:46 ioport:9000(size=256) memory:d2104000-d2104fff memory:d2100000-d2103fff
*-usb:1 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:df007000-df0073ff
*-isa description: ISA bridge product: HM65 Express Chipset Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 05 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0
*-storage description: SATA controller product: 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 logical name: scsi2 version: 05 width: 32 bits clock: 66MHz capabilities: storage msi pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=0 resources: irq:45 ioport:e0b0(size=8) ioport:e0a0(size=4) ioport:e090(size=8) ioport:e080(size=4) ioport:e060(size=32) memory:df006000-df0067ff
*-disk description: ATA Disk product: ST9750420AS vendor: Seagate physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 0002 serial: 5WS0A7QR size: 698GiB (750GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=e0c5913d
*-volume:0 description: Windows FAT volume vendor: MSDOS5.0 physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: FAT32 serial: 4ce5-3acb size: 3004MiB capacity: 3004MiB capabilities: primary fat initialized configuration: FATs=2 filesystem=fat
*-volume:1 description: EXT4 volume vendor: Linux physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 logical name: / version: 1.0 serial: c198cc2a-d86a-4460-a4d5-3fc0b21e439c size: 28GiB capacity: 28GiB capabilities: primary journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2012-03-15 16:53:54 filesystem=ext4 lastmountpoint=/ modified=2012-05-02 18:52:04 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,user_xattr,acl,barrier=1,data=ordered mounted=2012-05-09 19:06:01 state=mounted
*-volume:2 description: Windows NTFS volume physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 version: 3.1 serial: 4c1cdebc-ec09-2947-a3b5-c1f9f1cddc1c size: 152GiB capacity: 152GiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2011-02-22 16:02:47 filesystem=ntfs label=OS state=clean
*-volume:3 description: Extended partition physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 size: 514GiB capacity: 514GiB capabilities: primary extended partitioned partitioned:extended
*-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 10GiB capabilities: nofs
*-logicalvolume:1 description: HPFS/NTFS partition physical id: 6 logical name: /dev/sda6 capacity: 504GiB
*-cdrom description: DVD-RAM writer product: BD-MLT UJ240AS vendor: MATSHITA physical id: 1 bus info: scsi@2:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/sr0 version: 1.00 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc
*-serial UNCLAIMED description: SMBus product: 6 Series/C200 Series Chipset Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 05 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:df005000-df0050ff ioport:e040(size=32)

    Read the article

  • Oracle Enterprise Manager 12c Ops Center Jump-Start for Partners

    - by Get_Specialized!
Following the announcement at Oracle OpenWorld Tokyo, partners can check out these resources to further learn about Oracle Enterprise Manager 12c Ops Center and then use it to optimize your solutions/services or offer new ones:
- Product Documentation
- Oracle Technology Network Resources
- Online Learning Series for Partners in the OPN Enterprise Manager KnowledgeZone
- Whitepaper: Making Infrastructure-as-a-Service in the Enterprise a Reality
- IDC report: Oracle Enterprise Manager 12c Embraces the Cloud with Integrated Lifecycle Management
- Follow-up webcast, April 12th: Total Cloud Control for Systems
Oracle Enterprise Manager Ops Center 12c is included at no extra charge in the support contract of Oracle Systems customers. To learn more, see the Ops Center Everywhere Program. And if you're not already a member, be sure to join the Oracle Enterprise Manager KnowledgeZone on the Oracle PartnerNetwork Portal.

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out.

At the PASS Community Summit a couple of weeks ago, it was announced that the software previously code-named Juneau would be released under the name SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition.

Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use.

First, the basics
Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, Git, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio project type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly.

For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you.

Unified Check-In
If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing solution, and you can then do a complete, unified check-in of all changes, whether they are application or database changes.
This is simply not possible with SQL Source Control, because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build triggered by the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build triggered by the second check-in, which contains the “other half” of your changes, should pass, and so the amount of time that the build was broken may be very, very short; but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT.

Refactoring and Migrations
If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (an Early Access version is now available) you have the ability to create your own custom migration script to take the place of the commands that the tool would have run, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, if you simply know a better way then you can take control and do things your way instead of theirs.

You Decide
In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night), and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

Popular Releases

AudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features: list of words (mp3 files) is available upon typing when a download path is defined; list of download paths added; paths history settings added. Bugs fixed: case mismatch in word search field fixed; path-not-exist bug when history has been used fixed; path, when filled from dialog, not stored; refresh autocomplete list after path change; word sought is deleted when path is changed; at the end sought word list is deleted; word list not refreshed download end...

Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi-server connections.

Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 puts the emphasis on the FilteredStream and on easing how to manage exceptions that can occur due to the network or any other issue you might encounter. Will be available through NuGet on 29/09/2013. FilteredStream features provided by the Twitter Stream API: ability to track specific keywords; ability to track specific users; ability to track specific locations. Additional features: detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...

AcDown (Chinese downloader; the description is in Chinese and was garbled in extraction): AcDown v4.5: the recoverable fragments mention support for Acfun, Bilibili, YouTube, SF and other sites, the AcPlay player, C# and .NET Framework 2.0, and new GoodManga.net support.

OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (it can run with no issues from a flash drive).

CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: stereoscopic 3D display support; based on the Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...

CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK. Fixed an issue where the constant for the primary key attribute was being duplicated in all entity classes. Added the ability to override the base class for entity classes.

C# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.

CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts. To avoid the DLLs getting locked by the OS, use the MSI file for the installation.

SimpleExcelReportMaker: Serm 0.02: Source code and sample.

Magick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: the ToBitmap method of MagickImage returns a png instead of a bmp; the value for full transparency changed from 255 (Q8)/65535 (Q16) to 0; MagickColor now uses floats instead of Byte/UInt16.

Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of movie folders and files, there has been some refinement done. I would also like to introduce Blu-ray movie folder support for pre-Frodo and Frodo-onwards versions of XBMC. To start with, the context menu option for renaming movies now has three sub-options: Movie & Folder, Movie only, and Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...

WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart; this is the maintenance release to update to the new movie API. Please send feedback on fix requests.

FFXIV Crafting Simulator: Crafting Simulator 2.3: Major refactoring of the code-behind. Added a current durability and a current CP textbox.

DNN CMS Platform: 07.01.02: Major highlights: added the ability to manage the Vanity URL prefix; added the ability to filter members in the member directory by role; fixed an issue where the user could inadvertently click the login button multiple times; fixed issues where core classes could not be used in an out-of-process cache provider; fixed an issue where the profile visibility submenu was not displayed correctly; fixed an issue where the member directory was broken when the Convert URL to lowercase setting was enabled; fixed issu...

Rawr: Rawr 5.4.1: This is the downloadable WPF version of Rawr! For the web-based version, see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...

Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: SignalR sample is added.

CODE Framework: 4.0.30923.0: See the change notes in the documentation section for details on what's new. Note: if you download the class reference help file, you have to right-click the file, pick "Properties", and then unblock it, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.

JayData - The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2, and can be used on Node.js as well. See it in action in this 6-minute video. KendoUI examples: JayData example site. Examples for map integration: JayData example site. What's new in JayData 1.3.2 - Indian Summer Edition: for detai...

ZXing.Net: ZXing.Net 0.12.0.0: Sync with rev. 2892 of the Java version; new PDF417 decoder; improved Aztec decoder; global speed improvements; direct Kinect support for ColorImageFrame; better Structured Append support; many other small bug fixes and improvements.

New Projects

CACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASE
Classic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.
CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.
Easy Code: A settings framework.
EduSoft: This is a school eg.
GameStuff: GameStuff is a library of physics and geometry concepts for video games.
Nekora Test Project: Nekora test project.
PopCorn Console Game: Simple console game.
RadioController: This project started from people installing tablets in Mustangs. You would typically lose most control of the radio. This project brings that back!
Random searcher i pochodne: a search tool for multimedia files and whatever your heart desires (translated from Polish).
SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org.

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana, as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find the systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well, in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide more (and more efficient) development and test resources for the organization, as well as a test environment for Solaris OpenStack.

At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for; it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!
We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's, once we get demand up to where we need more then we'll add them.Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.
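Addendum (not from the original post): since the post mentions turning on verbose and debug messaging in the command-line clients, here is a rough sketch of what that looks like, assuming Havana-era clients; exact flags vary by client version.

    nova --debug list            # dump the underlying REST requests/responses (the JSON structure dumps)
    neutron --verbose net-list   # more verbose client-side logging from the Neutron client
    svcs -xv                     # on Solaris, list any SMF services (including OpenStack ones) in maintenance

The --debug output is usually the quickest way to see which API call actually failed before digging into the service daemons' logs.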

    Read the article

  • Wireless networks are not detected at start up in Ubuntu 12.04

    - by Kanhaiya Mishra
I recently (three or four days ago) installed Ubuntu 12.04 via the Windows installer, i.e. wubi.exe. After the installation completed, wireless and Ethernet were both working well. But after a restart, wireless networks didn't show up, even though both networking and wireless were enabled in the network manager. Occasionally the available networks did show up after boot, but very rarely. So I went through various posts regarding wireless issues in Ubuntu 12.04 and tried many things, but ended up with nothing satisfactory. I have a Broadcom 4313 LAN network controller and the brcmsmac driver. Relying on some suggestions, I then tried to install the bcm-wl driver but couldn't, due to an error recorded in the jockey.log file. Then I tried a fresh installation of the same driver but still couldn't resolve the startup issue with wireless. Then I reinstalled Ubuntu inside Windows again using the wubi installer. This time the same problem occurred after boot, but I successfully installed the wl driver before disturbing Ubuntu's file-system files. Again the same issue, but this time I noticed some new things: if I insert the Ethernet/LAN cable before startup, then wireless networks are available and of course the wired LAN networks also work; but if I don't plug in the cable before startup and then plug it in afterwards, it detects neither the Ethernet network nor wireless. So I hadn't noticed before that the wired LAN, along with wifi, also doesn't work after startup. But if I suspend the session, make it sleep, and log in again, then it works. I tried it every time and the WLAN worked perfectly. But I am still unable to resolve the startup problem. Each time I boot, first I have to suspend it once; only then are networks available. It irritates me each time I reboot my laptop. So please help me out of this problem. Any ideas/help regarding this issue would be highly appreciated.
Some of the commands that I ran gave the following results:
# lspci
00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12)
00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06)
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06)
00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
04:00.0 Ethernet controller: Atheros Communications Inc. AR8152 v1.1 Fast Ethernet (rev c1)
ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
# sudo lshw -C network
*-network
   description: Wireless interface
   product: BCM4313 802.11b/g/n Wireless LAN Controller
   vendor: Broadcom Corporation
   physical id: 0
   bus info: pci@0000:03:00.0
   logical name: eth1
   version: 01
   serial: 70:f1:a1:49:b6:ab
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
   configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 ip=192.168.1.7 latency=0 multicast=yes wireless=IEEE 802.11
   resources: irq:17 memory:f0500000-f0503fff
*-network
   description: Ethernet interface
   product: AR8152 v1.1 Fast Ethernet
   vendor: Atheros Communications Inc.
   physical id: 0
   bus info: pci@0000:04:00.0
   logical name: eth0
   version: c1
   serial: b8:ac:6f:6b:f7:4a
   capacity: 100Mbit/s
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
   configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
   resources: irq:44 memory:f0400000-f043ffff ioport:2000(size=128)
# lsmod | grep wl
wl 2568210 0
lib80211 14381 2 lib80211_crypt_tkip,wl
# sudo iwlist eth1 scanning
eth1 Scan completed :
Cell 01 - Address: 30:46:9A:85:DA:9A
   ESSID:"BH DASHIR 2"
   Mode:Managed
   Frequency:2.462 GHz (Channel 11)
   Quality:4/5 Signal level:-60 dBm Noise level:-98 dBm
   IE: IEEE 802.11i/WPA2 Version 1
       Group Cipher : CCMP
       Pairwise Ciphers (1) : CCMP
       Authentication Suites (1) : PSK
   IE: Unknown: DD7F0050F204104A00011010440001021041000100103B000103104700109AFE7D908F8E2D381860668BA2E8D8771021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084
   Encryption key:on
   Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
             24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
             12 Mb/s; 48 Mb/s
Cell 02 - Address: C0:3F:0E:EB:45:14
   ESSID:"BH DASHIR 3"
   Mode:Managed
   Frequency:2.462 GHz (Channel 11)
   Quality:2/5 Signal level:-71 dBm Noise level:-98 dBm
   IE: IEEE 802.11i/WPA2 Version 1
       Group Cipher : CCMP
       Pairwise Ciphers (1) : CCMP
       Authentication Suites (1) : PSK
   IE: Unknown: DD7F0050F204104A00011010440001021041000100103B00010310470010F3C9BBE499D140540F530E7EBEDE2F671021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084
   Encryption key:on
   Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
             24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
             12 Mb/s; 48 Mb/s
Cell 03 - Address: A0:21:B7:A8:2F:C0
   ESSID:"BH DASHIR 4"
   Mode:Managed
   Frequency:2.422 GHz (Channel 3)
   Quality:1/5 Signal level:-86 dBm Noise level:-98 dBm
   IE: IEEE 802.11i/WPA2 Version 1
       Group Cipher : CCMP
       Pairwise Ciphers (1) : CCMP
       Authentication Suites (1) : PSK
   IE: Unknown: DD8B0050F204104A0001101044000102103B0001031047001000000000000010000000A021B7A82FC01021000D4E6574676561722C20496E632E10230009574E523130303076321024000456324831104200046E6F6E651054000800060050F20400011011001B574E5231303030763228576972656C6573732041502D322E344729100800020086103C000103
   Encryption key:on
   Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
             9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
             48 Mb/s; 54 Mb/s
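A frequently suggested workaround for the BCM4313 with the proprietary wl driver (an assumption on my part, not something the poster confirmed) is to reload the module once the desktop is up, so the card re-initializes and rescans:

    # reload the wl driver after boot
    sudo modprobe -r wl && sudo modprobe wl
    # then ask NetworkManager to rescan
    nmcli dev wifi list

If the reload reliably brings the networks back, the two modprobe commands can be added to /etc/rc.local (before the exit 0 line) as a stopgap until the underlying driver issue is fixed.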

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
In this blog post I'll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demoing in the blog post.
Introduction
Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you can mine the information available in the IIS logs to analyse the dense zones (most used pages) and performance test those pages, rather than wasting time testing and tuning the least used pages in your application.
What are IIS Logs
To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits, as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\
Walkthrough
1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven't used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…
2. Create a new test project in Visual Studio. Let's call it IISLogsToWebPerfTestDemo.
3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library; name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project.
4. Under the IISLogsToWebPerfTestEngine project add references to:
- Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll
- LogParser, also called MSUtil – c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll
5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser.
using System;
using System.Collections.Generic;
using System.Text;
using MSUtil;
using LogQuery = MSUtil.LogQueryClassClass;
using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
using LogRecordSet = MSUtil.ILogRecordset;
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Diagnostics;

namespace IisLogsToWebPerfTestEngine
{
    // By making use of Log Parser it is possible to query the IIS log using SELECT queries
    public class IISLogReader
    {
        private string _iisLogPath;

        public IISLogReader(string iisLogPath)
        {
            _iisLogPath = iisLogPath;
        }

        public IEnumerable<WebTestRequest> GetRequests()
        {
            LogQuery logQuery = new LogQuery();
            IISLogInputFormat iisInputFormat = new IISLogInputFormat();

            // currently these columns give us sufficient information to construct the web test requests
            string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
            LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

            // Apply a bit of transformation
            while (!recordSet.atEnd())
            {
                ILogRecord record = recordSet.getRecord();
                if (record.getValueEx("cs-method").ToString() == "GET")
                {
                    string server = record.getValueEx("s-ip").ToString();
                    string path = record.getValueEx("cs-uri-stem").ToString();
                    string querystring = record.getValueEx("cs-uri-query").ToString();
                    StringBuilder urlBuilder = new StringBuilder();
                    urlBuilder.Append("http://");
                    urlBuilder.Append(server);
                    urlBuilder.Append(path);
                    if (!String.IsNullOrEmpty(querystring))
                    {
                        urlBuilder.Append("?");
                        urlBuilder.Append(querystring);
                    }

                    // You could make substitutions by introducing parameterized web tests.
                    WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                    Debug.WriteLine(request.UrlWithQueryString);
                    yield return request;
                }
                recordSet.moveNext();
            }
            Console.WriteLine(" That's it! Closing the reader");
            recordSet.close();
        }
    }
}

6. Connect the dots by adding a project reference from 'IisLogsToWebPerfTest' to 'IisLogsToWebPerfTestEngine'. Right click the 'IisLogsToWebPerfTest' project and add a new class, 'WebTest1Coded.cs'. WebTest1Coded.cs inherits from the WebTest class. By overriding the GetRequestEnumerator method we can pass the log files to the IISLogReader class, which uses Log Parser to query the log file and extract the web requests; these are yielded back for playback when the test is run.

namespace IisLogsToWebPerfTest
{
    using System;
    using System.Collections.Generic;
    using System.Text;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
    using IisLogsToWebPerfTestEngine;

    // This class is a coded web performance test implementation that simply passes
    // the path of the iis logs to the IISLogReader class, which does the heavy
    // lifting of reading the contents of the log file and converting them to tests.
    // You could have multiple such classes that inherit from WebTest, implement
    // the GetRequestEnumerator method, and pass different log files for different tests.
    public class WebTest1Coded : WebTest
    {
        public WebTest1Coded()
        {
            this.PreAuthenticate = true;
        }

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            // substitute the highlighted path with the path of the iis log file
            IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
            foreach (WebTestRequest request in reader.GetRequests())
            {
                yield return request;
            }
        }
    }
}

7. It's time to fire the test off and see the IIS log played back as a web performance test.
From the Test menu choose the Test View window; you should see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).
8. Optionally you can create a Load Test by keeping 'WebTest1Coded' as the base test.
Conclusion
You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even how to record the test. If you haven't already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch. Use this solution over a longer term to see usage trends by user, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don't forget to share … Keep RocKiNg!
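Addendum (not part of the original post): if you only want the "dense zones" before generating any tests, Log Parser can also be run straight from the command line. A small sketch, with an illustrative log path; the query uses standard Log Parser 2.2 syntax:

    "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -i:W3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

This prints the ten most requested URLs, which are natural first candidates for the coded web tests above.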

    Read the article

  • Ubuntu 12.04 wireless (wifi) not working, can not upgrade to 12.10, touchpad gestures not working. What to do?

    - by Ritwik
I installed Ubuntu 12.04 LTS three days ago, and since then the wireless feature and touchpad gestures have not been working. I've tried everything on the internet but am still unsuccessful. I also can't upgrade to Ubuntu 12.10. These are the commands I tried; please help me. EDIT: I just realized USB 3.0 is also not working.
COMMAND: lsb_release -r
OUTPUT:
Release: 12.04
COMMAND: lspci
OUTPUT:
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
00:1c.1 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 (rev d5)
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d5)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation HM86 Express LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
07:00.0 3D controller: NVIDIA Corporation GF117M [GeForce 610M/710M / GT 620M/625M/630M/720M] (rev a1)
08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 07)
09:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5229 PCI Express Card Reader (rev 01)
0f:00.0 Network controller: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter (rev 01)
COMMAND: sudo apt-get install linux-backports-modules-wireless-lucid-generic
OUTPUT:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-backports-modules-wireless-lucid-generic
COMMAND: cat /etc/lsb-release; uname -a
OUTPUT:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.5 LTS"
Linux ritwik-PC 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:46:11 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
COMMAND: lspci -nnk | grep -iA2 net
OUTPUT:
08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 07)
   Subsystem: Hewlett-Packard Company Device [103c:225d]
   Kernel driver in use: r8169
--
0f:00.0 Network controller [0280]: Qualcomm Atheros QCA9565 / AR9565 Wireless Network Adapter [168c:0036] (rev 01)
   Subsystem: Hewlett-Packard Company Device [103c:217f]
COMMAND: lsusb
OUTPUT:
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 8087:8008 Intel Corp.
Bus 002 Device 002: ID 8087:8000 Intel Corp.
COMMAND: iwconfig
OUTPUT:
lo no wireless extensions.
eth0 no wireless extensions.
COMMAND: rfkill list all
OUTPUT:
0: hp-wifi: Wireless LAN
   Soft blocked: no
   Hard blocked: no
1: hp-bluetooth: Bluetooth
   Soft blocked: no
   Hard blocked: no
COMMAND: lsmod
OUTPUT:
Module Size Used by
snd_hda_codec_realtek 224215 1
bnep 18281 2
rfcomm 47604 0
bluetooth 180113 10 bnep,rfcomm
parport_pc 32866 0
ppdev 17113 0
nls_iso8859_1 12713 1
nls_cp437 16991 1
vfat 17585 1
fat 61512 1 vfat
snd_hda_intel 33719 3
snd_hda_codec 127706 2 snd_hda_codec_realtek,snd_hda_intel
snd_hwdep 17764 1 snd_hda_codec
snd_pcm 97275 2 snd_hda_intel,snd_hda_codec
snd_seq_midi 13324 0
snd_rawmidi 30748 1 snd_seq_midi
snd_seq_midi_event 14899 1 snd_seq_midi
snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event
nouveau 775039 0
joydev 17693 0
snd_timer 29990 2 snd_pcm,snd_seq
snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq
ttm 76949 1 nouveau
uvcvideo 72627 0
snd 79041 15 snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
videodev 98259 1 uvcvideo
drm_kms_helper 46978 1 nouveau
psmouse 98051 0
drm 241971 3 nouveau,ttm,drm_kms_helper
i2c_algo_bit 13423 1 nouveau
soundcore 15091 1 snd
snd_page_alloc 18529 2 snd_hda_intel,snd_pcm
v4l2_compat_ioctl32 17128 1 videodev
hp_wmi 18092 0
serio_raw 13211 0
sparse_keymap 13890 1 hp_wmi
mxm_wmi 13021 1 nouveau
video 19651 1 nouveau
wmi 19256 2 hp_wmi,mxm_wmi
mac_hid 13253 0
lp 17799 0
parport 46562 3 parport_pc,ppdev,lp
r8169 62190 0
COMMAND: sudo su
modprobe -v ath9k
OUTPUT:
insmod /lib/modules/3.2.0-67-generic/kernel/net/wireless/cfg80211.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_hw.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k_common.ko
insmod /lib/modules/3.2.0-67-generic/kernel/net/mac80211/mac80211.ko
insmod /lib/modules/3.2.0-67-generic/kernel/drivers/net/wireless/ath/ath9k/ath9k.ko
COMMAND: do-release-upgrade
OUTPUT:
Err Upgrade tool signature 404 Not Found [IP: 91.189.88.149 80]
Err Upgrade tool 404 Not Found [IP: 91.189.88.149 80]
Fetched 0 B in 0s (0 B/s)
WARNING:root:file 'quantal.tar.gz.gpg' missing
Failed to fetch
Fetching the upgrade failed. There may be a network problem.
COMMAND: sudo modprobe ath9k
COMMAND: dmesg | grep ath9k
NO OUTPUT FOR THEM
COMMAND: dmesg | grep -e ath -e 80211
OUTPUT:
[ 13.232372] type=1400 audit(1408867538.399:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=975 comm="apparmor_parser"
[ 13.232615] type=1400 audit(1408867538.399:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=975 comm="apparmor_parser"
[ 15.186599] ath3k: probe of 3-4:1.0 failed with error -110
[ 15.186635] usbcore: registered new interface driver ath3k
[ 88.219329] cfg80211: Calling CRDA to update world regulatory domain
[ 88.351665] cfg80211: World regulatory domain updated:
[ 88.351667] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
[ 88.351670] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[ 88.351671] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[ 88.351673] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
[ 88.351674] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
[ 88.351675] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
COMMAND: sudo apt-get install touchpad-indicator
OUTPUT:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: gir1.2-gconf-2.0 python-pyudev
Suggested packages: python-qt4 python-pyside.qtcore
The following NEW packages will be installed: gir1.2-gconf-2.0 python-pyudev touchpad-indicator
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 84.1 kB of archives.
After this operation, 1,136 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://ppa.launchpad.net/atareao/atareao/ubuntu/ precise/main touchpad-indicator all 0.9.3.12-1ubuntu1 [46.5 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ precise/main gir1.2-gconf-2.0 amd64 3.2.5-0ubuntu2 [7,098 B]
Get:3 http://archive.ubuntu.com/ubuntu/ precise/main python-pyudev all 0.13-1 [30.5 kB]
Fetched 84.1 kB in 2s (31.6 kB/s)
Selecting previously unselected package gir1.2-gconf-2.0.
(Reading database ... 169322 files and directories currently installed.)
Unpacking gir1.2-gconf-2.0 (from .../gir1.2-gconf-2.0_3.2.5-0ubuntu2_amd64.deb) ...
Selecting previously unselected package python-pyudev.
Unpacking python-pyudev (from .../python-pyudev_0.13-1_all.deb) ...
Selecting previously unselected package touchpad-indicator.
Unpacking touchpad-indicator (from .../touchpad-indicator_0.9.3.12-1ubuntu1_all.deb) ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for hicolor-icon-theme ...
Processing triggers for software-center ...
INFO:softwarecenter.db.update:no translation information in database needed
Setting up gir1.2-gconf-2.0 (3.2.5-0ubuntu2) ...
Setting up python-pyudev (0.13-1) ...
Setting up touchpad-indicator (0.9.3.12-1ubuntu1) ...
Not able to find (drivers/net/wireless/ath/ath9k/hw.c) or (drivers/net/wireless/ath/ath9k/hw.h)
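One observation on the do-release-upgrade failure (my assumption, not confirmed in the post): the 404 for quantal.tar.gz.gpg is the symptom you see when the 12.10 archives have left the main mirrors after end-of-life. The usual way around it is to skip the dead intermediate release and point the upgrader at the next LTS instead:

    # tell the release upgrader to offer LTS releases only
    sudo sed -i 's/^Prompt=.*/Prompt=lts/' /etc/update-manager/release-upgrades
    sudo do-release-upgrade

This attempts a 12.04 -> 14.04 upgrade rather than 12.04 -> 12.10, which also gets you a much newer kernel and ath9k driver for the QCA9565.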

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx
This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.
Ping Pong
Ping Pong is a methodology commonly used in pair programming. One developer will write a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer.
The reasoning behind this testing methodology is to facilitate pair programming. That is to say that this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).
Test Blazer
Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task.
The next day, or later in the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests.
This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way to add an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution.
This methodology is also a good test for the tests. Can another developer figure out what the system should do just by reading the tests? This question will be answered as the production coder works their way through the test blazer's tests.
Test Driven Development (TDD)
TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing tests or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle; a small sketch of one turn of the cycle follows at the end of this post.
The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud.
TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. a refactoring), the design of the system emerges.
For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.
Acceptance Test Driven Development (ATDD)
Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, Acceptance Tests focus on the larger integrated environment. Acceptance Tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer.
ATDD generally uses the same tools as TDD. However, ATDD uses fewer mocks and test doubles than TDD.
ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.
Behaviour Driven Development (BDD)
BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests.
Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour driven test suites.
Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications".
This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.
Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
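Here is the promised sketch of one red-green-refactor turn, in C# with MSTest; the class and the numbers are invented purely for illustration:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Step 1 (red): write a failing test first.
    [TestClass]
    public class PriceCalculatorTests
    {
        [TestMethod]
        public void Total_AppliesTenPercentDiscountOverOneHundred()
        {
            var calc = new PriceCalculator();
            Assert.AreEqual(108m, calc.Total(120m)); // fails until Total is implemented
        }
    }

    // Step 2 (green): the simplest production code that passes.
    public class PriceCalculator
    {
        public decimal Total(decimal subtotal)
        {
            return subtotal > 100m ? subtotal * 0.9m : subtotal;
        }
    }

    // Step 3 (refactor): with the test green, clean up names and duplication safely.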

    Read the article

  • Getting Started With Tailoring Business Processes

    - by Richard Bingham
In this article, and for the sake of simplicity, we will use the term "On-Premise" to mean a deployment where you have design-time development access to the instance, including administration of the technology components, the applications filesystem, and the database. In reality this might be a local development instance that is then supported by a team who can deploy your customizations to the restricted production instance equivalents.
Tools Overview
Firstly let's look at the design-time tools within JDeveloper for customizing and extending the artifacts of a business process. In essence this falls into two buckets: the SOA Composite Editor for working with BPEL processes, and the BPM Studio.
The SOA Composite Editor
As a standard extension to JDeveloper, this graphical design tool should be familiar to anyone who has previously worked with Oracle SOA Server. With easy-to-use modeling capability, backed up by a full XML source view (read-only), it provides everything that is needed to implement the technical design. In simple terms, once deployed to the remote SOA Server, the composite components (like Mediator) leverage the Event Delivery Network (EDN) for interaction with the application logic.
If you are customizing an existing Fusion Applications BPEL process, be aware that it supports MDS-based customization layers just like Page Composer, where different customizations are used based on the run-time context, such as a specific Product or Business Unit. This also makes them safe from patching and upgrades, although only a single active version of the composite is available at run-time. This is defined by a field on the composite record, available in Enterprise Manager. Obviously if you wish to fire different activities and tasks based on the user context, you should include switches to fork the flows in your custom BPEL process.
Figure 1 – A BPEL process in Composite Editor
The following are the simplified steps for making customizations to BPEL processes. This is the most common method of changing the business processes of Fusion Applications, as over 400 BPEL-based composite applications are provided out-of-the-box.
1. Set up your local Fusion Applications JDeveloper environment. The SOA Composite Editor should be installed as part of the Fusion Applications extension. If there are problems you can also find it under the 'Check for Updates' help menu option.
2. Since SOA Server is not part of the JDeveloper integrated WebLogic Server, set up a standalone WebLogic environment for deploying and testing. Obviously you might use a Fusion Applications development instance also.
3. Package the existing standard Fusion Applications SOA Composite using Enterprise Manager and export it as a complete SOA Archive (SAR) file, resulting in a local .jar file. You may need to ask your system administrator for this.
4. Import the exported SAR .jar file into JDeveloper using the File menu, under the option 'SOA Archive into SOA Project'.
5. In JDeveloper set the appropriate customization layer values, and then change from the default role to the Fusion Applications Customization Developer role.
6. Make the customizations and save the application project.
7. Finally, redeploy the composite application, either to a direct Application Server connection, or as a fresh SAR (jar) file that can then be re-imported and deployed via Enterprise Manager.
The Business Process Management (BPM) Suite
In addition to the relatively low-level development environment associated with BPEL process creation, Oracle provides a suite of products that allow business process adjustments to be made without the need for some of the programming skills. The aim is to abstract much of the technical implementation and to give Business Analysts tools for immediately implementing organization changes. Obviously there are some limitations on what they can do; however, the BPM Suite functionality increases with each release, and for the majority of cases the tools remain as applicable as their developer-oriented sister. At the current time business processes must be explicitly coded to support just one of these use-cases: either BPEL for developer use or BPM for business analyst use. That said, they both run on the same SOA Server in much the same way. The components bundled in each SOA Composite Application can be verified by inspection through Enterprise Manager.
Figure 2 – A BPM Process in JDeveloper BPM Suite
BPM processes are written in a standard notation (BPMN) and the modeling tools are very similar to those of BPEL. The steps to deploy a custom BPM process are also essentially the same, since the BPM process is bundled into a SOA Composite just like a BPEL process. As such the SOA Composite Editor actually has support for both artifacts and even allows them to be used together, such as calling a BPM process as a partnerlink from a BPEL process. For more details see the references below.
Business Analyst Tooling
In addition to using JDeveloper extensions for BPM development, there are run-time tools that Business Analysts can use to make adjustments, so that the system can be tuned to match changes to the business operation without the high costs of an IT project. The first tool to consider is the BPM Composer, deployed with the middleware SOA Server and accessible online; for Fusion Applications it is under the Business Process icon on the homepage of the Application Composer.
Figure 3 – Business Process Composer showing a CRM process flow
The key difference between this and using JDeveloper is that the BPM Composer has a Business Catalog prepopulated with features and functions that can be used, mostly through registered WebServices. This means no coding or complex interface development is required; simply drag, drop, and configure. The items in the business catalog are seeded either by Oracle (as a BPM Template) or by your own custom development. You cannot create or generate catalog content from BPM Composer directly. As per the screenshot, you can see the Business Catalog content in the BPM Project browser region.
In addition, other online tools for use by Business Analysts include the BPM Worklist application for editing business rules and approval management configuration, plus the SOA Composer, which focuses on non-approval business rules and domain value maps.
At the current time there are only a handful of BPM processes shipped with Fusion Applications HCM and CRM, including on-boarding workers and processing customer registrations. This also means a limited number of associated BPM Templates provided out-of-the-box, and therefore a limited Business Catalog. That said, BPM-based extension is a powerful capability to leverage and will most likely develop going forwards, especially for use in SaaS deployments where full design-time JDeveloper access is not available.
Further Reading
- For BPEL – Fusion Applications Extensibility Guide – Section 12
- For BPM – Fusion Applications Extensibility Guide – Section 7
- The product-specific documentation and implementation guides for Fusion Applications
- Fusion Middleware Developers Guide for SOA Suite
- Modeling and Implementation Guide for Oracle Business Process Management
- User's Guide for Oracle Business Process Composer
- Oracle University courses on BPM Suite and SOA Development

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration.
On the local machine:
- The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5.
- The site has been tested successfully, using both IIS and the VS Web Development Server.
- The IIS site config has all authentication methods disabled except Windows Authentication.
- The local machine is not on any domain.
- The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos).
- Extended Protection is Off.
- All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account.
- All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote".
- I've added a display line to the web page which shows the currently-logged-in user, and it shows exactly what I would expect (whichever local user I logged in with).
On the remote machine:
- The server is running Windows Server 2008 R2, IIS 7.5.
- Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears.
- The IIS site config has all authentication methods disabled except Windows Authentication.
- The remote machine is not on any domain.
- The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos).
- Extended Protection is Off.
- On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address.
- If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials.
- The Windows Authentication IIS role feature is installed.
- The WindowsAuthentication Module is added (at the Server level).
- The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication.
- The site does load if I turn off Windows Authentication and enable Anonymous (obviously).
I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS. I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors, and the results don't seem to tell me anything useful; all I see is the following warning:
MODULE_SET_RESPONSE_ERROR_STATUS
ModuleName: IIS Web Core
Notification: 2
HttpStatus: 401
HttpReason: Unauthorized
HttpSubStatus: 2
ErrorCode: 2147942405
ConfigExceptionInfo:
Notification: AUTHENTICATE_REQUEST
ErrorCode: Access is denied. (0x80070005)
...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials).
The trace does indicate that the WindowsAuthentication module is correctly loaded, because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here?
Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is:
Localhost:
HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:42:34 GMT
Content-Length: 6399
Proxy-Support: Session-Based-Authentication
Remote:
HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:43:13 GMT
Content-Length: 1293
Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
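For what it's worth, one configuration shape that produces exactly this symptom (a 401 with no WWW-Authenticate headers) is an empty or overridden providers list in the effective configuration. This is a hedged suggestion rather than a confirmed diagnosis; it may be worth comparing the output of appcmd list config /section:system.webServer/security/authentication/windowsAuthentication on both machines against something like:

    <system.webServer>
      <security>
        <authentication>
          <windowsAuthentication enabled="true">
            <providers>
              <add value="Negotiate" />
              <add value="NTLM" />
            </providers>
          </windowsAuthentication>
        </authentication>
      </security>
    </system.webServer>

If a lower-level web.config or a locked section on the production box clears the providers, IIS rejects the request without ever offering a challenge.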

    Read the article

  • Deploying an HttpHandler web service

    - by baron
I am trying to build a web service that handles HTTP POST and GET requests. Here is a sample:

public class CodebookHttpHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod == "POST")
        {
            //DoHttpPostLogic();
        }
        else if (context.Request.HttpMethod == "GET")
        {
            //DoHttpGetLogic();
        }
    }
    ...
    public void DoHttpPostLogic()
    {
        ...
    }
    public void DoHttpGetLogic()
    {
        ...
    }

I need to deploy this but I am struggling with how to start. Most online references show making a website, but really, all I want to do is respond when an HTTP POST is sent. I don't know what to put in the website; I just want that code to run.
Edit: I am following this site as it's exactly what I'm trying to do. I have the website set up, I have the code for the handler in a .cs file, and I have edited the web.config to add the handler for the file extension I need. Now I am at step 3, where you tell IIS about this extension and map it to ASP.NET. Also, I am using IIS 7, so the interface is slightly different from the screenshots. This is the problem I have:
1) Go to the website
2) Go to Handler Mappings
3) Go to Add Script Map
4) Request path - put the extension I want to handle
5) Executable - it seems I am told to set aspnet_isapi.dll here. Maybe this is incorrect?
6) Give it a name
7) Hit the OK button:
Add Script Map
Do you want to allow this ISAPI extension? Click "Yes" to add the extension with an "Allowed" entry to the ISAPI and CGI Restrictions list or to update an existing extension entry to "Allowed" in the ISAPI and CGI Restrictions list.
Yes No Cancel
8) Hit Yes:
Add Script Map
The specified module required by this handler is not in the modules list. If you are adding a script map handler mapping, the IsapiModule or the CgiModule must be in the modules list.
OK
Edit 2: I have just figured out that a managed handler is for handlers written in managed code, a script map is for configuring an executable, and module mapping works with HTTP modules. So I should be using option 1 - Add Managed Handler. See: http://yfrog.com/11managedhandlerp
I know what my request path is for the file extension... and I know the name (I can call it whatever I like), so it must be the Type field I am struggling with. In the application's folder (in IIS), so far I just have MyHandler.cs and web.config (and of course a file with the extension I am trying to create the handler for!).
Edit 3: Progress. Now that I have the code and the web.config set up, I test to see if I can browse to the filename.CustomExtension file:
HTTP Error 404.3 - Not Found
The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map.
So in IIS 7 I go to Handler Mappings and add it in. See this MSDN example, it is exactly what I am trying to follow. The class looks like this:

using System.Web;

namespace HandlerAttempt2
{
    public class MyHandler : IHttpHandler
    {
        public MyHandler()
        {
            //TODO: Add constructor logic here
        }

        public void ProcessRequest(HttpContext context)
        {
            var objResponse = context.Response;
            objResponse.Write("<html><body><h1>It just worked");
            objResponse.Write("</body></html>");
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }
}

I add the handler in as follows:
Request path: *.whatever
Type: MyHandler (class name - this appears correct as per the example!)
Name: whatever
Try to browse to the custom file again (app pool in Integrated mode):
HTTP Error 500.21 - Internal Server Error
Handler "whatever" has a bad module "ManagedPipelineHandler" in its module list
Try to browse to the custom file again (app pool in Classic mode):
HTTP Error 404.17 - Not Found
The requested content appears to be script and will not be served by the static file handler.
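For reference, the web.config entry that the IIS manager writes for a managed handler in Integrated mode looks roughly like this; note the type must be the namespace-qualified class name (a sketch based on the names in the question, not a verified fix):

    <system.webServer>
      <handlers>
        <add name="whatever" path="*.whatever" verb="*"
             type="HandlerAttempt2.MyHandler"
             resourceType="Unspecified" preCondition="integratedMode" />
      </handlers>
    </system.webServer>

"MyHandler" alone, without the HandlerAttempt2 namespace, would not resolve, and the .cs file must actually be compiled (e.g. placed under App_Code, or built into an assembly in bin) for the type to load.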

    Read the article

  • People Picker can't find Forms Authentication Users in WSS 3.0

    - by beyti
I used a lot of tutorials to convert my default Windows-authenticated WSS web app to use Forms Authentication. What I've done so far:
1. Created a web app and a site in WSS 3.0, and enabled anonymous access for all site content. This WSS app is on the "wss3" server.
2. Created a membership DB with regsql.exe from the .NET Framework folder, using its default settings, i.e. a database named aspnetdb. This DB is on the "sqlserver" server.
3. Gave db_owner permission on the aspnetdb database to the WSS web app admin. The user is registered under the same domain as the SQL and WSS machines.
4. Configured the site's web.config file with the following changes/additions.
Added the connection string:

<connectionStrings>
  <clear />
  <add name="LocalSqlServer" connectionString="server=sqlserver;database=aspnetdb; Integrated Security=SSPI" providerName="System.Data.SqlClient" />
</connectionStrings>

Added the membership provider:

<membership>
  <providers>
    <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="LocalSqlServer" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="/" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression="" />
  </providers>
</membership>

Also checked the People Picker settings:

<PeoplePickerWildcards>
  <clear />
  <add key="AspNetSqlMembershipProvider" value="%" />
</PeoplePickerWildcards>

5. After all that, I changed the authentication provider of the site I created to use Forms, giving it the provider name "AspNetSqlMembershipProvider".
6. Created some users for Forms Authentication via the ASP.NET Configuration page in Visual Studio.
7. Checked the users in the aspnetdb database; they are there.
8. Tried to log in to WSS with one of them. Successfully logged in - with no privileges, of course.
9. Tried to give permission to the logged-in user via Web Application Policy.
10. People Picker couldn't find it at all. None of the Forms users could be found, and it seems the AD connection is also affected, since none of the AD users can be found either.
It seems I'm missing something in the People Picker configuration. Any help would be appreciated. Thanks in advance. Beytan
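The commonly cited missing piece for this scenario (offered here as an assumption, since the post doesn't show it) is that the membership provider must also be registered in the web.config of the Central Administration web application, not just the content web app, before its People Picker can resolve Forms users:

    <connectionStrings>
      <add name="LocalSqlServer"
           connectionString="server=sqlserver;database=aspnetdb; Integrated Security=SSPI"
           providerName="System.Data.SqlClient" />
    </connectionStrings>
    <membership>
      <providers>
        <add name="AspNetSqlMembershipProvider"
             type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="LocalSqlServer" applicationName="/" />
      </providers>
    </membership>

The same applies to any other web application whose People Picker needs to see these users. The <clear /> element from the content app's connectionStrings should not be copied into Central Administration as-is, since it would remove entries that WSS itself relies on.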

    Read the article

  • Interactive Data Language, IDL: Does anybody care?

    - by Alex
Does anyone use a language called Interactive Data Language, IDL? It is popular with scientists. I think it is a poor language because it is proprietary (every terminal running it has to have an expensive license purchased) and it has minimal support (try searching for IDL, the language, right now on Stack). I am trying to convince my colleagues to stop using it and learn C/C++/Python/Fortran/Java/Ruby. Does anybody know about or even care about IDL enough to have opinions on it? What do you think of it? Should I tell my colleagues to stop wasting their time on it now? How can I convince them?
Edit: People are getting the impression that I don't know or use IDL. Also, I said IDL has minimal support, which is true in one sense, so I must clarify that the scientific libraries are indeed large. I use IDL all the time, but this is exactly the problem: I am only using IDL because colleagues use it. There is a file format IDL uses, the .sav, which can only be opened in IDL. So I must use IDL to work with this data and transfer the data back to colleagues, but I know I would be more efficient in another language. This is like someone sending you a Microsoft Word file in an email attachment; if you don't understand how wrong that is, then you probably write too many words, not enough code, and you bought Microsoft Word.
Edit: As an alternative to IDL, Python is popular. Here is a list of the pros of IDL (and the cons) from AstroBetter:
Pros of IDL
- Mature, with many numerical and astronomical libraries available
- Wide astronomical user base
- Numerical aspect well integrated with the language itself
- Many local users with deep experience
- Faster for small arrays
- Easier installation
- Good, unified documentation
- Standard GUI run/debug tool (IDLDE)
- Single widget system (no angst about which to choose or learn)
- SAVE/RESTORE capability
- Use of keyword arguments as flags more convenient
Cons of IDL
- Narrow applicability, not well suited to general programming
- Slower for large arrays
- Array functionality less powerful
- Table support poor
- Limited ability to extend using C or Fortran; such extensions hard to distribute and support
- Expensive; sometimes a problem collaborating with others that don't have or can't afford licenses
- Closed source (only RSI can fix bugs)
- Very awkward to integrate with IRAF tasks
- Memory management more awkward
- Single widget system (useless if working within another framework)
- Plotting:
  - Awkward support for symbols and math text
  - Many font systems, portability issues (v5.1 alleviates somewhat)
  - Not as flexible or as extensible
  - Plot windows not intrinsically interactive (e.g., pan & zoom)
Pros of Python
- Very general and powerful programming language, yet easy to learn
- Strong, but optional, Object Oriented programming support
- Very large user and developer community, very extensive and broad library base
- Very extensible with C, C++, or Fortran; portable distribution mechanisms available
- Free; non-restrictive license; Open Source
- Becoming the standard scripting language for astronomy
- Easy to use with IRAF tasks
- Basis of STScI application efforts
- More general array capabilities
- Faster for large arrays, better support for memory mapping
- Many books and on-line documentation resources available (for the language and its libraries)
- Better support for table structures
- Plotting framework (matplotlib) more extensible and general
- Better font support and portability (only one way to do it too)
- Usable within many windowing frameworks (GTK, Tk, WX, Qt…)
- Standard plotting functionality independent of framework used
- Plots are embeddable within other GUIs
- More powerful image handling (multiple simultaneous LUTs, optional resampling/rescaling, alpha blending, etc.)
- Support for many widget systems
- Strong local influence over capabilities being developed for Python
Cons of Python
- More items to install separately
- Not as well accepted in the astronomical community (but support clearly growing)
- Scientific libraries not as mature:
  - Documentation not as complete, not as unified
  - Not as deep in astronomical libraries and utilities
  - Not all IDL numerical library functions have corresponding functionality in Python
- Some numeric constructs not quite as consistent with the language (or slightly less convenient than IDL)
- Array indexing convention "backwards"
- Small array performance slower
- No standard GUI run/debug tool
- Support for many widget systems (angst regarding which to choose)
- Current lack of a function equivalent to SAVE/RESTORE in IDL
- matplotlib does not yet have equivalents for all IDL 2-D plotting capability (e.g., surface plots)
- Use of keyword arguments as flags less convenient
- Plotting:
  - Comparatively immature, still much development going on
  - Missing some plot types (e.g., surface)
  - 3-d capability requires VTK (though matplotlib has some basic 3-d capability)
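One small practical counter to the .sav lock-in mentioned above: SciPy can read IDL SAVE files directly, so colleagues' .sav files don't force you into IDL just to inspect the data. A short example, with an invented file name:

    from scipy.io import readsav

    data = readsav("colleague_results.sav")  # dict-like: variable name -> value
    for name, value in data.items():
        print(name, getattr(value, "shape", type(value)))

This covers reading only; writing .sav files back out is another matter.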

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU-intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front-end will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface, and testing will even take place in a separate process, with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter, which interfaced easily with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats, with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly, so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code were compiled to LLVM bitcode format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time; ideally, I'd like to JIT-compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses of a specific C++ class. The C++ testing engine would then only see C++ objects, while the JIT setup code compiled scripting code that implemented some of the methods for those objects. It seems that if I used the right name-mangling algorithm, it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call, which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing information and test state information, and calls from the testing engine to methods of particular C++ objects whose code was supplied not by C++ but by the scripting language. In summary:
    1) Can I link precompiled (either .bc, .o, or .a) files in as part of the JIT compilation, code-generation process?
    2) Can I link in code using the process in 1) above in such a way that I am able to create code that acts as if it were all written in C++?

    Read the article

  • What is wrong with the following Fluent NHibernate Mapping?

    - by ashraf
    Hi, I have 3 tables (many-to-many relationship):
    Resource {ResourceId, Description}
    Role {RoleId, Description}
    Permission {ResourceId, RoleId}
    I am trying to map the above tables in Fluent NHibernate. This is what I am trying to do:
    var aResource = session.Get<Resource>(1); // 2 roles associated (roles 1 and 2)
    var aRole = session.Get<Role>(1);
    aResource.Roles.Remove(aRole); // I try to delete just 1 role from Permission.
    But the SQL generated here is (which is wrong):
    Delete from Permission where ResourceId = 1
    Insert into Permission (ResourceId, RoleId) values (1, 2);
    Instead of (the right way):
    Delete from Permission where ResourceId = 1 and RoleId = 1
    Why does NHibernate behave like this? What is wrong with the mapping? I even tried with Set instead of IList. Here is the full code.
    Entities:
    public class Resource
    {
        public virtual string Description { get; set; }
        public virtual int ResourceId { get; set; }
        public virtual IList<Role> Roles { get; set; }

        public Resource()
        {
            Roles = new List<Role>();
        }
    }

    public class Role
    {
        public virtual string Description { get; set; }
        public virtual int RoleId { get; set; }
        public virtual IList<Resource> Resources { get; set; }

        public Role()
        {
            Resources = new List<Resource>();
        }
    }
    Mapping:
    public class ResourceMap : ClassMap<Resource>
    {
        public ResourceMap()
        {
            Id(x => x.ResourceId);
            Map(x => x.Description);
            HasManyToMany(x => x.Roles).Table("Permission");
        }
    }

    public class RoleMap : ClassMap<Role>
    {
        public RoleMap()
        {
            Id(x => x.RoleId);
            Map(x => x.Description);
            HasManyToMany(x => x.Resources).Table("Permission");
        }
    }
    Program:
    static void Main(string[] args)
    {
        var factory = CreateSessionFactory();
        using (var session = factory.OpenSession())
        {
            using (var tran = session.BeginTransaction())
            {
                var aResource = session.Get<Resource>(1);
                var aRole = session.Get<Role>(1);
                aResource.Roles.Remove(aRole);
                session.Save(aResource);
                session.Flush();
                tran.Commit();
            }
        }
    }

    private static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("server=(local);database=Store;Integrated Security=SSPI"))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Program>()
                .Conventions.Add<CustomForeignKeyConvention>())
            .BuildSessionFactory();
    }

    public class CustomForeignKeyConvention : ForeignKeyConvention
    {
        protected override string GetKeyName(FluentNHibernate.Member property, Type type)
        {
            return property == null ? type.Name + "Id" : property.Name + "Id";
        }
    }
    Thanks, Ashraf.
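    A note on one likely avenue: NHibernate maps an IList as a bag, and a bag gives it no way to address a single row of the link table, so any change makes it delete the whole collection and re-insert the survivors. A sketch of the usual adjustment, assuming Fluent NHibernate's AsSet() and Inverse() methods and Iesi.Collections set types, and untested against this exact schema (the entity properties would also need to become Iesi ISet<T> instances):
    using FluentNHibernate.Mapping;

    // Sketch: map Permission with set semantics and make one side inverse,
    // so only the Resource side writes to the link table and single rows
    // can be deleted by their composite key.
    public class ResourceMap : ClassMap<Resource>
    {
        public ResourceMap()
        {
            Id(x => x.ResourceId);
            Map(x => x.Description);
            HasManyToMany(x => x.Roles)
                .Table("Permission")
                .AsSet(); // targeted DELETE instead of a full rebuild
        }
    }

    public class RoleMap : ClassMap<Role>
    {
        public RoleMap()
        {
            Id(x => x.RoleId);
            Map(x => x.Description);
            HasManyToMany(x => x.Resources)
                .Table("Permission")
                .AsSet()
                .Inverse(); // this side no longer issues its own SQL
        }
    }
    With that mapping, removing one role should produce a single "Delete from Permission where ResourceId = 1 and RoleId = 1" rather than a rebuild of the whole collection.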

    Read the article

  • What is Causing this IIS 7 Web Service Sporadic Connectivity Error?

    - by dpalau
    On sporadic occasions we receive the following error when attempting to call an .asmx web service from a .NET client application: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." By sporadic I mean that it might not occur at all, might occur once every few days, or might occur a half-dozen times a day for some users. It never occurs on a user's first web service call, and the subsequent (usually identical) call always works immediately after the failure. The failures happen across a variety of methods in the service and usually occur 15-20 seconds after the request (according to the log). Looking in the IIS site log for the particular call will show one or the other of the following Windows error codes:
    - 121: The semaphore timeout period has elapsed.
    - 1236: The network connection was aborted by the local system.
    Some additional environment details:
    - Running on an internal web farm consisting of two servers running IIS 7 on Windows Server 2008. These problems did not occur when running in an older IIS 6 web farm of three servers on Windows Server 2003 (and we use a single IIS6/2003 instance for our development and staging environments with no issues). EDIT: Also, all of these server instances are VMware virtual machines; not sure if that is a surprise anymore or not.
    - The web service is a .NET 2.0/3.5 compiled .asmx web service that has its own application pool (.NET 2.0, integrated pipeline). Only Windows Authentication is enabled.
    - We have another web service on the farm that uses the same physical path as the primary service, the only difference being that Basic Authentication is enabled. This is used for a portion of our ERP system. We have tried using the same and different application pools, with no effect on the error. This site isn't hit as often as the primary site and has never had an error.
    - As mentioned, the error only happens when called from the .NET client, not from other applications. The client application always creates a new web service object for each request and sets the service credentials to System.Net.CredentialCache.DefaultCredentials. The application is either deployed locally to a client or run in a Citrix server session. Users running in Citrix don't seem to experience the issue, only locally deployed clients. The Citrix servers and the web farm are located in the same physical location and in the same IP range (10.67.xx.xx). Locally deployed clients experiencing the error are located elsewhere (10.105.xx.xx, 10.31.xx.xx).
    I've checked the OS logs to see if I can see any problems, but nothing really sticks out. EDIT: Actually, I myself just ran into the error a little bit ago. I decided to check out the logs again and saw that there was a Security log entry of "Audit Failure" at the 'same' time (IIS log entry at 1:39:59, event log entry at 1:39:50). Not sure if this is a coincidence or not; I'll have to check out the logs of previous errors. I'm probably grasping at straws, but the details:
    Log Name: Security
    Source: Microsoft-Windows-Security-Auditing
    Date: 7/8/2009 1:39:50 PM
    Event ID: 5159
    Task Category: Filtering Platform Connection
    Level: Information
    Keywords: Audit Failure
    User: N/A
    Computer: is071019.<**.net
    Description: The Windows Filtering Platform has blocked a bind to a local port.
    Application Information:
        Process ID: 1260
        Application Name: \device\harddiskvolume1\windows\system32\svchost.exe
    Network Information:
        Source Address: 0.0.0.0
        Source Port: 54802
        Protocol: 17
    Filter Information:
        Filter Run-Time ID: 0
        Layer Name: Resource Assignment
        Layer Run-Time ID: 36
    I've also tried to use Failed Request Tracing in IIS 7, but the service call never actually gets to where FRT can capture it (even though the failure is logged in the web service log). The network infrastructure group said they checked the DNS and the NIC settings are correct, so there is no 'flapping'; everything pans out. I'm not sure that they checked any domain controller servers, though, to see if that could be an issue. Any ideas? Or any other debugging strategies to get to the bottom of this? I'm just the developer in charge of the software and don't really have the knowledge of what to investigate from the networking side of things, although it does sound like a networking issue to me based on what is happening. Thanks in advance for any help.
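    One low-cost experiment for this class of error is to rule out stale kept-alive connections (something between the client subnets and the farm silently dropping an idle connection the client still considers open). The generated .asmx proxy classes are partial, so the proxy's GetWebRequest can be overridden to disable HTTP keep-alive; a sketch, with WebServiceProxy standing in for the actual generated class name:
    using System;
    using System.Net;
    using System.Web.Services.Protocols;

    // Sketch: force a fresh TCP connection per call so a connection closed
    // on the server side between calls cannot be reused by the client.
    public partial class WebServiceProxy : SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            var request = (HttpWebRequest)base.GetWebRequest(uri);
            request.KeepAlive = false; // opt out of connection reuse
            return request;
        }
    }
    If the failures stop with keep-alive disabled, the problem lies in the network path between the remote client subnets and the farm rather than in the service itself.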

    Read the article

  • PHP framework question

    - by iconiK
    I'm currently working on a browser-based MMO and have chosen the LAMP stack because of the extremely low cost to start with in production (versus Windows + IIS + ASP.NET/C# + SQL Server, even though I have MSDN Universal). However, I will need a PHP framework for this, as it's no easy task. I am not restricted by anything other than the ability to run on Linux, as I will use a dedicated cloud hosting solution (and a VMware image for development) and can configure it as needed. In no specific order:
    - It has to be easily scalable; this is crucial. If the game becomes a steady success it will eventually outgrow the server beyond what the host provides and would have to be moved to several load-balanced servers. It is crucial that this can be done with minimum effort. I do know this might require following strict conventions, so if you know of any for your suggested framework, please explain what would be needed.
    - It has to provide modules for all the core tasks: authentication, ACL, database access, MVC, and so on. One or two missing modules are fine, as long as they can easily be written and integrated.
    - It should support internationalization. I think there is no excuse for any web framework not to provide means of translating the application and switching between languages without a lot of effort from the programmer.
    - It must have very good community support and preferably commercial support as well. Yes, I do know QCodo/QCubed is nice, but it is not mature enough for this task.
    - Smooth AJAX support is required. Whether the framework comes with AJAX-capable widgets or has an easy way of adding AJAX is not relevant, as long as AJAX is easily doable. I plan to use jQuery + Dojo, or one of them alone; I'm not exactly sure.
    - Auto-magically doing stuff when it improves readability and relieves a lot of effort would be especially nice, if it is generally reliable and does not interfere with the other requirements. This seems to be the case with CakePHP.
    I have read a lot of comparisons and I know it's a really hot debate. The general answer is "try and see for yourself what suits you". However, I can't say that is easy for this task, and I'm calling on your experience with building applications with similar requirements. So far I'm torn between Zend and CakePHP by the general criteria; however, all well-known frameworks offer the same functionality in some way or another, with different approaches, each with its own advantages and disadvantages.
    Edits:
    - I am kinda new to MVC; however, I am willing to learn it, and I don't care if a framework is easier for those new to MVC. I have lots of time to learn MVC and any other architectures you recommend.
    - I will use Zend as a utility "framework", even though it's just a collection of libraries (some good ones though, as I have been told).
    - Current PHP contenders are: CakePHP, Kohana, Zend alone.

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.
    Summary: Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider functionality, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.
    Our Application: We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted or denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.
    Our Rewrite: We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP and in our application. The new way, they will simply create users in LDAP, add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and completely redo it.
    Our Problem: The problem is that none of us have any experience with ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET membership provider and role manager sounds like it would be a great experience, and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider and role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.
    Our Requirements: Just a quick-and-dirty list:
    - Maintain the ability to have a DB of users, authenticate them, and give admins (only, not users) the ability to CRUD users.
    - Allow the site to integrate with LDAP. When this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/DB and the groups/containers as they exist in LDAP.
    - .NET 3.5 is being used (a mix of ASP.NET WebForms and ASP.NET MVC).
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing).
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups.
    - We have to be able to authenticate via LDAP when the application is configured to do so.
    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: just "You should/shouldn't use xyz; here's why." Links regarding ASP.NET membership provider and role management are very welcome; most of the stuff I'm finding is 5+ years old. Edit: Added some stuff to "Our Rewrite".
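    To give a feel for the scale of the LDAP piece if the custom-provider route is taken: the credential and group checks themselves are small. A minimal sketch, assuming the directory is Active Directory reachable via System.DirectoryServices.AccountManagement (new in .NET 3.5); LdapCredentialChecker is a hypothetical helper name, and a real MembershipProvider subclass would still have to implement the rest of its abstract members:
    using System;
    using System.DirectoryServices.AccountManagement;

    // Sketch: the checks a custom MembershipProvider/RoleProvider could
    // delegate to when LDAP integration is enabled.
    public class LdapCredentialChecker
    {
        private readonly string domain;

        public LdapCredentialChecker(string domain)
        {
            this.domain = domain;
        }

        public bool Validate(string userName, string password)
        {
            using (var context = new PrincipalContext(ContextType.Domain, domain))
            {
                // True only if the directory accepts the credentials
                return context.ValidateCredentials(userName, password);
            }
        }

        public bool IsMemberOf(string userName, string groupName)
        {
            using (var context = new PrincipalContext(ContextType.Domain, domain))
            using (var user = UserPrincipal.FindByIdentity(context, userName))
            using (var group = GroupPrincipal.FindByIdentity(context, groupName))
            {
                // Maps an LDAP group membership onto an application group
                return user != null && group != null && user.IsMemberOf(group);
            }
        }
    }
    A custom RoleProvider could delegate to IsMemberOf in the same way to tie application groups to LDAP groups, which is essentially the relationship the rewrite asks for.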

    Read the article

  • Asp.net mvc application deployment / security issues

    - by WestDiscGolf
    I'll start with apologies; I wasn't sure if this was best posted here or on Server Fault, so if it's in the wrong place then please move it. :-)
    Basic information: I have written the first module of a new application at work. This is written using Visual Studio 2010, targeting .NET 3.5 (at the moment) and ASP.NET MVC 2. It has been working fine during development, running on the built-in development server from VS; however, it does not work once deployed to IIS 7/7.5. To deploy the application, I built it in release mode and created a deployment package by right-clicking on the project in the solution explorer (this will be done with an automated build in TFS once upgraded from the beta). This has then been imported into IIS on the server. The application is using Windows/domain authentication.
    Issue #1: I can fire up Internet Explorer and browse to the application from a client computer as well as on a remote desktop connection. I can execute the code which reads/stores data in session state fine from the IE instance on the remote desktop, but if I browse to it from the client PC it seems to lose the session state. I click on the form submit and the page refreshes but doesn't execute the required code. I've tried setting session state to InProc, SQLServer, and StateServer, but with no luck. :-(
    Issue #2: As part of the application, it views PDF and TIFF documents on the fly, which are on a network share on the office network, and creates thumbnails if a document hasn't been viewed before. This works if running on the machine the application is deployed to; however, when browsing from a client PC I get an error saying:
    Access to the path '\\fileserver\folder\file.tif' is denied.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.UnauthorizedAccessException: Access to the path '\\fileserver\folder\file.TIF' is denied. ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via <identity impersonate="true"/>, the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user.
    As this is on a different server, the user is not accessible. To get round this I have tried:
    1. Setting the application pool to run as domain administrator (I know this is a security risk, but I'm just trying to get it to work at the moment!).
    2. Setting the log-on account for the World Wide Web Publishing service to be the domain admin. When trying to restart the service I get: "Windows could not start the World Wide Web Publishing Service service on the Local Computer. Error 1079: The account specified for this service is different from the account specified for the other services running in the same process."
    Any pointers/help would be much appreciated, as I'm pulling my hair out (of what little I have left).
    Update: I've been using this funky little tool I found - DelegConfig v2 beta (Delegation / Kerberos Configuration Tool). This has been really useful. So I've got the accessing of the file share working (there is a test page which will read the files), so now I've just got the issue of passing the user's credentials through to the SQL Server (wasn't my choice to do it this way!!) to execute the queries, etc., but I can't get it to log on as the user. It tries to access it as "NT Authority\Network Service", which doesn't have a SQL login (it should be connecting as the logged-on user). My connection string is:
    <add name="User" connectionString="Data Source=.;Integrated Security=True" providerName="System.Data.SqlClient" />
    No initial catalog is specified as the system is spread over multiple DBs (also wasn't my choice!!). I really appreciate all the help so far! :-) Any further hints?
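    Once delegation is in place, the remaining piece is making the data access run under the caller's token rather than the application pool identity, so that Integrated Security=True flows the request user to SQL Server. A rough sketch of per-request impersonation around the SQL work (assuming Windows authentication is on and delegation works; UserQueries and the query text are placeholders):
    using System;
    using System.Data.SqlClient;
    using System.Security.Principal;
    using System.Web;

    public static class UserQueries
    {
        // Runs a query as the authenticated request user rather than the
        // application pool identity, so Integrated Security picks up that user.
        public static object RunScalarAsCaller(string sql)
        {
            var identity = (WindowsIdentity)HttpContext.Current.User.Identity;
            using (WindowsImpersonationContext impersonation = identity.Impersonate())
            {
                using (var connection = new SqlConnection(
                    "Data Source=.;Integrated Security=True"))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open(); // opened under the caller's token
                    return command.ExecuteScalar();
                }
            }
        }
    }
    Alternatively, <identity impersonate="true"/> in web.config impersonates for the whole request; scoping the impersonation to the data access keeps the rest of the pipeline running as the pool identity.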

    Read the article

  • Installing Windows on HP Proliant Servers without SmartStart

    - by Fitzroy
    I have a PXE server for deploying Windows XP and Windows 7 to workstations. The process is as follows:
    1. Boot the workstation from the NIC.
    2. The workstation sends a DHCP request.
    3. The DHCP server responds with an IP address and the location of the PXE server.
    4. The workstation downloads a WinPE image file from the PXE server via TFTP.
    5. The workstation stores the WinPE image file in memory and executes it.
    6. Once booted into WinPE, I connect to a network share to gain access to either the Windows XP or Windows 7 installation files.
    7. A custom script is launched to guide you through the process of formatting and partitioning the hard drive(s) (using DISKPART and FORMAT).
    8. Another custom script asks for details such as the hostname to assign to the workstation. The answers provided are used to build an unattended answer file (SIF [Setup Information File] for WinXP and XML for Win7).
    9. The Windows setup EXE is launched, passing the unattended answer file to it as a parameter.
    The Windows XP and Windows 7 installation sources have been customised to include the drivers for our Dell workstations. They also run a number of scripts upon first booting up to install software packages. This process works very well for our workstations and I would now like to use it for building our servers too. The vast majority of our servers are HP ProLiant DL360 G6, DL380 G5 and DL380 G6. They're running Windows Server 2003 (various editions) or 2008 (various editions). To date, we have always built the HP ProLiant servers using the SmartStart CD provided. SmartStart does three useful things for us:
    1. Sets up RAID with the HP Array Configuration Utility (ACU).
    2. Installs and configures SNMP.
    3. Installs various HP tools for Windows (HP Array Configuration Utility, HP Array Diagnostic Utility, HP ProLiant Integrated Management Log Viewer, etc).
    Using SmartStart I have never had to manually download and install Windows drivers for network, sound, video, etc. I'm not sure if this is because SmartStart copies drivers from the CD during setup, or whether Windows just has the drivers natively in its driver CAB. If I abandon the SmartStart CD in favour of my PXE server I would have to do the following:
    1. As I won't have access to ACU, I'll configure the RAID (before booting to the PXE server) by pressing F8 during the boot process to access the Option ROM Configuration for Arrays (ORCA).
    2. SNMP and the HP tools will have to be installed once the Windows installation is complete, using the ProLiant Support Pack.
    Is this method OK? Is there anything that the SmartStart CD does that I'll be unable to do by other means? Are there any disadvantages to not using the SmartStart CD? Many thanks.
    UPDATE 05/01/12: I've been reading through the SmartStart Scripting Toolkit documentation. The scripting toolkit contains command-line tools which work within WinPE and can do such things as configure BIOS settings, configure an array, and set up iLO. I'm personally not too bothered about configuring BIOS settings, as I rarely deviate from the defaults (unless the server is to be a Hyper-V host). I'm not too fussed about being able to configure the array from within WinPE, as I'm happy to just press F8 and use Option ROM Configuration for Arrays (ORCA). Although, if it's easy enough to do, I will explore this further, as it saves time if everything can be configured from within WinPE. One of the nice features all the tools possess is that you can pass input files to them. E.g. configure one server to your requirements, capture its configuration to a file (using the appropriate tool), then use the tool on other servers, passing in the input file with the captured configuration. Array controller drivers appear to be included with the toolkit, along with an example of how to incorporate them within a WinPE build. I suppose WinPE won't be able to see logical volumes (i.e. two physical disks in a RAID 1 configuration) without the array controller drivers? I mentioned in my post that SmartStart normally installs a bunch of HP tools for Windows for you. I've had a look today, and if you run the SmartStart CD from within Windows, all the tools can be installed. Therefore I can do this after the Windows installation is complete. The SmartStart CD appears to contain a lot of Windows drivers. I can customise my Windows 2008 source to incorporate these drivers. However, I understand that incorporating an array controller driver is a little different from most drivers. I believe that you have to provide the driver during the very early stages of the Windows setup. I'm working through the Scripting Toolkit documentation to try and work this out...

    Read the article

  • GLSL Error: failed to preprocess the source. How can I troubleshoot this?

    - by Brent Parker
    I'm trying to learn to play with OpenGL GLSL shaders. I've written a very simple program to simply create a shader and compile it. However, whenever I get to the compile step, I get the error:
    Error: Preprocessor error
    Error: failed to preprocess the source.
    Here's my very simple code:
    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <GL/glut.h>
    #include <GL/glext.h>
    #include <time.h>
    #include <stdio.h>
    #include <iostream>
    #include <stdlib.h>

    using namespace std;

    const int screenWidth = 640;
    const int screenHeight = 480;

    // Note: adjacent string literals are concatenated with no newline
    // between them, so this source reaches the GLSL compiler as one long line.
    const GLchar* gravity_shader[] = {
        "#version 140"
        "uniform float t;"
        "uniform mat4 MVP;"
        "in vec4 pos;"
        "in vec4 vel;"
        "const vec4 g = vec4(0.0, 0.0, -9.80, 0.0);"
        "void main() {"
        " vec4 position = pos;"
        " position += t*vel + t*t*g;"
        " gl_Position = MVP * position;"
        "}"
    };

    double pointX = (double)screenWidth/2.0;
    double pointY = (double)screenWidth/2.0;

    void initShader() {
        GLuint shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(shader, 1, gravity_shader, NULL);
        glCompileShader(shader);
        GLint compiled = true;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if(!compiled) {
            GLint length;
            GLchar* log;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
            log = (GLchar*)malloc(length);
            glGetShaderInfoLog(shader, length, &length, log);
            std::cout << log << std::endl;
            free(log);
        }
        exit(0);
    }

    bool myInit() {
        initShader();
        glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 0.0f);
        glPointSize(1.0);
        glLineWidth(1.0f);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0.0, (GLdouble) screenWidth, 0.0, (GLdouble) screenHeight);
        glEnable(GL_DEPTH_TEST);
        return true;
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(screenWidth, screenHeight);
        glutInitWindowPosition(100, 150);
        glutCreateWindow("Mouse Interaction Display");
        myInit();
        glutMainLoop();
        return 0;
    }
    Where am I going wrong? If it helps, I am trying to do this on an Acer Aspire One with an Atom processor and integrated Intel video, running the latest Ubuntu. It's not very powerful, but then again, this is a very simple shader. Thanks a lot for taking a look!

    Read the article
