Search Results

Search found 300 results on 12 pages for 'tt'.


  • Project Naming Convention Feedback Please

    - by Sam Striano
    I am creating a ASP.NET MVC 3 application using Entity Framework 4. I am using the Repository/Service Pattern and was looking for feedback. I currently have the following: MVC Application (GTG.dll) GTG GTG.Controllers GTG.ViewModels Business POCO's (GTG.Business.dll) This contains all business objects (Customer, Order, Invoice, etc...) EF Model/Repositories (GTG.Data.dll) GTG.Business (GTG.Context.tt) I used the Entity POCO Generator Templates. GTG.Data.Repositories Service Layer (GTG.Data.Services.dll) GTG.Data.Services - Contains all of the service objects, one per aggregate root. The following is a little sample code: Controller Namespace Controllers Public Class HomeController Inherits System.Web.Mvc.Controller Function Index() As ActionResult Return View(New Models.HomeViewModel) End Function End Class End Namespace Model Namespace Models Public Class HomeViewModel Private _Service As CustomerService Public Property Customers As List(Of Customer) Public Sub New() _Service = New CustomerService _Customers = _Service.GetCustomersByBusinessName("Striano") End Sub End Class End Namespace Service Public Class CustomerService Private _Repository As ICustomerRepository Public Sub New() _Repository = New CustomerRepository End Sub Function GetCustomerByID(ByVal ID As Integer) As Customer Return _Repository.GetByID(ID) End Function Function GetCustomersByBusinessName(ByVal Name As String) As List(Of Customer) Return _Repository.Query(Function(x) x.CompanyName.StartsWith(Name)).ToList End Function End Class Repository Namespace Data.Repositories Public Class CustomerRepository Implements ICustomerRepository Public Sub Add(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Add End Sub Public Sub Delete(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Delete End Sub Public Function GetByID(ByVal ID As Integer) As Business.Customer Implements IRepository(Of Business.Customer).GetByID Using db As New GTGContainer Return db.Customers.FirstOrDefault(Function(x) x.ID = ID) End Using End Function Public Function Query(ByVal Predicate As System.Linq.Expressions.Expression(Of System.Func(Of Business.Customer, Boolean))) As System.Linq.IQueryable(Of Business.Customer) Implements IRepository(Of Business.Customer).Query Using db As New GTGContainer Return db.Customers.Where(Predicate) End Using End Function Public Sub Save(ByVal Entity As Business.Customer) Implements IRepository(Of Business.Customer).Save End Sub End Class End Namespace
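
    For readers following the layering, note that every method in CustomerRepository above implements a member of a generic IRepository(Of T) contract. A rough C# rendering of that implied interface (the question's code is VB.NET; this is only an illustrative sketch, not the poster's actual definition):

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        // Illustrative only: the IRepository(Of T) contract implied by the
        // CustomerRepository members shown in the question, translated to C#.
        public interface IRepository<T>
        {
            void Add(T entity);
            void Delete(T entity);
            void Save(T entity);
            T GetByID(int id);
            IQueryable<T> Query(Expression<Func<T, bool>> predicate);
        }

    One point of feedback the sketch makes easy to see: Query builds its IQueryable inside a Using block, so the GTGContainer context is disposed before the service's ToList call enumerates the query; materializing the results before the context is disposed, or letting the service control the context's lifetime, avoids that.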

  • iOS TableView crash don't know how. Here is the app

    - by jollyr0ger
    Hi! In my app that you can download here: http://ge.tt/2DDqfJa I've started a discussion but is died here iOS TableView crash loading different data The problem is when I back from viewing the YouTube video to the recipes list, the app crash... And when i select a category for the second time, where have to load a tableview with different data source, it crash. This is the crash log Program received signal: “EXC_BAD_ACCESS”. (gdb) bt #0 0x00f0da63 in objc_msgSend () #1 0x04b27ca0 in ?? () #2 0x00002665 in -[RecipesListController viewWillAppear:] (self=0x4b38a00, _cmd=0x6d81a2, animated=1 '\001') at /Users/claudiocanino/Documents/iOS/CottoMangiato/Classes/RecipesListController.m:67 #3 0x00370c9a in -[UINavigationController _startTransition:fromViewController:toViewController:] () #4 0x0036b606 in -[UINavigationController _startDeferredTransitionIfNeeded] () #5 0x0037283e in -[UINavigationController pushViewController:transition:forceImmediate:] () #6 0x04f49549 in -[UINavigationControllerAccessibility(SafeCategory) pushViewController:transition:forceImmediate:] () #7 0x0036b4a0 in -[UINavigationController pushViewController:animated:] () #8 0x00003919 in -[CategoryViewController tableView:didSelectRowAtIndexPath:] (self=0x4b27ca0, _cmd=0x6d19e3, tableView=0x500c200, indexPath=0x4b2d650) at /Users/claudiocanino/Documents/iOS/CottoMangiato/Classes/CategoryViewCotroller.m:104 #9 0x0032a794 in -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] () #10 0x00320d50 in -[UITableView _userSelectRowAtPendingSelectionIndexPath:] () #11 0x000337f6 in __NSFireDelayedPerform () #12 0x00d8cfe3 in __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ () #13 0x00d8e594 in __CFRunLoopDoTimer () #14 0x00ceacc9 in __CFRunLoopRun () #15 0x00cea240 in CFRunLoopRunSpecific () #16 0x00cea161 in CFRunLoopRunInMode () #17 0x016e0268 in GSEventRunModal () #18 0x016e032d in GSEventRun () #19 0x002c342e in UIApplicationMain () #20 0x00001c08 in main (argc=1, argv=0xbfffef58) at /Users/claudiocanino/Documents/iOS/CottoMangiato/main.m:15 Another bt log: (gdb) bt #0 0x00cd76a1 in __CFBasicHashDeallocate () #1 0x00cc2bcb in _CFRelease () #2 0x00002dd6 in -[RecipesListController setRecipesArray:] (self=0x6834d50, _cmd=0x4293, _value=0x4e3bc70) at /Users/claudiocanino/Documents/iOS/CottoMangiato/Classes/RecipesListController.m:16 #3 0x00002665 in -[RecipesListController viewWillAppear:] (self=0x6834d50, _cmd=0x6d81a2, animated=1 '\001') at /Users/claudiocanino/Documents/iOS/CottoMangiato/Classes/RecipesListController.m:67 #4 0x00370c9a in -[UINavigationController _startTransition:fromViewController:toViewController:] () #5 0x0036b606 in -[UINavigationController _startDeferredTransitionIfNeeded] () #6 0x0037283e in -[UINavigationController pushViewController:transition:forceImmediate:] () #7 0x091ac549 in -[UINavigationControllerAccessibility(SafeCategory) pushViewController:transition:forceImmediate:] () #8 0x0036b4a0 in -[UINavigationController pushViewController:animated:] () #9 0x00003919 in -[CategoryViewController tableView:didSelectRowAtIndexPath:] (self=0x4b12970, _cmd=0x6d19e3, tableView=0x5014400, indexPath=0x4b2bd00) at /Users/claudiocanino/Documents/iOS/CottoMangiato/Classes/CategoryViewCotroller.m:104 #10 0x0032a794 in -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] () #11 0x00320d50 in -[UITableView _userSelectRowAtPendingSelectionIndexPath:] () #12 0x000337f6 in __NSFireDelayedPerform () #13 0x00d8cfe3 in 
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ () #14 0x00d8e594 in __CFRunLoopDoTimer () #15 0x00ceacc9 in __CFRunLoopRun () #16 0x00cea240 in CFRunLoopRunSpecific () #17 0x00cea161 in CFRunLoopRunInMode () #18 0x016e0268 in GSEventRunModal () #19 0x016e032d in GSEventRun () #20 0x002c342e in UIApplicationMain () #21 0x00001c08 in main (argc=1, argv=0xbfffef58) at /Users/claudiocanino/Documents/iOS/CottoMangiato/main.m:15 Thanks

  • Is it possible to reference remote content from chrome.manifest? (XULRunner)

    - by siemaa
    Hi, I have a XULRunner application and I've been trying to reference remote content from the chrome.manifest file. It's an application for the company I work in; it runs on a number of computers (most of them are used by other employees as well) as a kind of internet monitoring service. The problem I'd like to solve is this: updating the code of such an application usually requires me to manually copy the modified files to every computer the application runs on (I've had no luck getting automatic updates to work via the XULRunner platform). This process has become very tedious. What I'd like to have is a web server where all of the XUL and JS files would be accessible, so that every application could reference them from there. This would require me only to update the code on that server, and the applications (when restarted) would automatically pick up the latest code. What I've managed to do: I can reference JS scripts from a XUL file using HTTP-based URLs and everything works fine (I can use local binary components, etc.), although the XUL file itself has to be local - that is what I'd like to change. But when I put a line like this in chrome.manifest: content my_app http://path/to/app/files/ and then use it in default/preferences/pref.js: pref("toolkit.defaultChromeURI", "chrome://my_app/content/my_app.xul"); it just opens a console window (to test, I run the application manually with the -console option) and no code gets executed. The file can be downloaded remotely using wget, so I guess this isn't a web server issue. The applications run on Windows machines. Is there some kind of security restriction causing this behavior, or am I doing something wrong? Is it even possible to register remote, HTTP-based content as chrome?

  • HP F2180 driver installation fails on 64-bit Windows 7

    - by Noam Gal
    Hello, I am trying to install the HP Deskjet AIO (non-network) driver on my machine, which is running the 64-bit version of Windows 7. Before installing it, Windows detected my printer just fine... But I wanted to use the HP scanning application, because it allows me to scan several photos at once. I ran the DJ_AIO_NonNetwork_ENU_NB file I got from their site, and the installation went almost without a problem... However, at the part where it should have detected the printer, it didn't, so I skipped it - telling the installer I'd connect the printer later. After it was finished I was able to use the printer regularly, and also to scan using the HP application I wanted. However, the installer kept popping up at random intervals and giving me an error message. Yesterday I tried removing all the installed HP applications and installing from scratch. Running the same installer setup, it now insists that it does not support my operating system, and that 64-bit Vista is the highest it can go... I just don't understand why this is occurring all of a sudden. Has anybody here successfully installed the AIO driver on the 64-bit version of Windows 7? UPDATE: I've been chatting with HP support over the weekend and managed to really mess up my Windows install. At first, they told me to uninstall using an "unintall_l3" batch file inside their installer package, and then reinstall. That didn't work. The "l4" batch file didn't make any difference either. Afterwards I was told to install "Windows Install Clean Up" and remove many HP entries (most of which were not listed on my computer), and I also removed many other HP entries I came across. Then my Office 2k7 started failing. I searched around the web and ran Security Restore, so now my Office works, but Windows Explorer is all buggy - I can't seem to open it; it either hangs while trying to load my hard drives, or completely ignores them and just shows my libraries. Does anyone here have any idea how I can restore my Win7 to normal, with or without the annoying scanner? UPDATE 2: OK - Explorer is back to normal. I guess I just had to wait until it finished searching when opening Windows Explorer for the first time after the Security Restore. The scanner is still not working, though.

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled. serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000 10000+0 records in 10000+0 records out 41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000 10000+0 records in 10000+0 records out 41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s OF NOTE: My RAID50 is 3 sets of 8 disks. - This might not be the best config for SPEED. OS: Ubuntu 12.04.1 x64 Hardware Raid: RocketRaid 2782 - 24 Port Controller HardDriveType: Seagate Barracuda ES.2 1TB Drivers: v1.1 Open Source Linux Drivers. So 24 x 1TB drives, partitioned using parted. Filesystem is ext4. I/O scheduler WAS noop but have changed it to deadline with no seemingly performance benefit/cost. serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb GPT fdisk (gdisk) version 0.8.1 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/sdb: 41020686336 sectors, 19.1 TiB Logical sector size: 512 bytes Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8 Partition table holds up to 128 entries First usable sector is 34, last usable sector is 41020686302 Partitions will be aligned on 2048-sector boundaries Total free space is 5062589 sectors (2.4 GiB) Number Start (sector) End (sector) Size Code Name 1 2048 41015625727 19.1 TiB 0700 primary To me this should be working fine. I can't think of anything that would be causing this other then fundamental driver errors? I can't seem to get much/if any higher then the 361MB a second, is this hitting the "SATA2" link speed, which it shouldn't given it is a PCIe2.0 card. Or maybe some cacheing quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? Or if you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expected more from RAID 50 and 24 drives together... EDIT: (hdparm results) serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb /dev/sdb: Timing cached reads: 17458 MB in 2.00 seconds = 8735.50 MB/sec Timing buffered disk reads: 884 MB in 3.00 seconds = 294.32 MB/sec EDIT2: (config details) Also, I am using a RAID block size of 256K. I was told a larger block size is better for larger (in my case large video) files. EDIT3: (Bonnie++ Results. Would love some guidance with this!)

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed debian 7 on a physical machine. This is the configuration of the machine: 3 hard drives using RAID 5 Strip element size: 1M Read policy: Adaptive read ahead Write policy: Write Through /boot 200 MB ext2 / 15 GB ext3 SWAP 10GB LVM rest (~500GB) emphasized text I installed postgresql, created a big database (over 1GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run. Then, I installed XEN, created a domU, with another debian distro. On this OS, I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run. I checked the kernel on both dom0 and domU. uname-a returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time). After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm, and ran hdparm -Tt on both systems, on all my partitions on both systems, and I get similar results. Now, I am stuck, I don't know what is different, and what could be the cause of such a big difference. Additional Info: Debian runs on a Dell server PowerEdge 2950 postgresql: 9.1.9 (both dom0 and domU) xen-linux-system: 3.2.0 xen-hypervisor: 4.1 Thanks EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU: root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 2020+0 records in 2020+0 records out 2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s And here is dom0: root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB ^C1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB 1693+0 records in 1693+0 records out 1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s What can be the cause of this caching system? And how can we "fix" it? Can we apply it to dom0? EDIT 2: I switched my virtual disk type. To do so I followed this article. I did a dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy: it should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres)

  • Using Unity – Part 3

    - by nmarun
    The previous blog was about registering and invoking different types dynamically. In this one I’d like to show how Unity manages/disposes the instances – say hello to Lifetime Managers. When a type gets registered, either through the config file or when RegisterType method is explicitly called, the default behavior is that the container uses a transient lifetime manager. In other words, the unity container creates a new instance of the type when Resolve or ResolveAll method is called. Whereas, when you register an existing object using the RegisterInstance method, the container uses a container controlled lifetime manager - a singleton pattern. It does this by storing the reference of the object and that means so as long as the container is ‘alive’, your registered instance does not go out of scope and will be disposed only after the container either goes out of scope or when the code explicitly disposes the container. Let’s see how we can use these and test if something is a singleton or a transient instance. Continuing on the same solution used in the previous blogs, I have made the following changes: First is to add typeAlias elements for TransientLifetimeManager type: 1: <typeAlias alias="transient" type="Microsoft.Practices.Unity.TransientLifetimeManager, Microsoft.Practices.Unity"/> You then need to tell what type(s) you want to be transient by nature: 1: <type type="IProduct" mapTo="Product2"> 2: <lifetime type="transient" /> 3: </type> 4: <!--<type type="IProduct" mapTo="Product2" />--> The lifetime element’s type attribute matches with the alias attribute of the typeAlias element. Now since ‘transient’ is the default behavior, you can have a concise version of the same as line 4 shows. Also note that I’ve changed the mapTo attribute from ‘Product’ to ‘Product2’. I’ve done this to help understand the transient nature of the instance of the type Product2. By making this change, you are basically saying when a type of IProduct needs to be resolved, Unity should create an instance of Product2 by default. 1: public string WriteProductDetails() 2: { 3: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}<br/>Hash Code: {3}", 4: Name, Category, MfgDate.ToString("MM/dd/yyyy hh:mm:ss tt"), GetHashCode()); 5: } Again, the above change is purely for the purpose of making the example more clear to understand. The display will show the full date and also displays the hash code of the current instance. The GetHashCode() method returns an integer when an instance gets created – a new integer for every instance. When you run the application, you’ll see something like the below: Now when you click on the ‘Get Product2 Instance’ button, you’ll see that the Mfg Date (which is set in the constructor) and the Hash Code are different from the one created on page load. This proves to us that a new instance is created every single time. To make this a singleton, we need to add a type alias for the ContainerControlledLifetimeManager class and then change the type attribute of the lifetime element to singleton. 1: <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 
3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="singleton" /> 5: </type> Running the application now gets me the following output: Click on the button below and you’ll see that the Mfg Date and the Hash code remain unchanged => the unity container is storing the reference the first time it is created and then returns the same instance every time the type needs to be resolved. Digging more deeper into this, Unity provides more than the two lifetime managers. ExternallyControlledLifetimeManager – maintains a weak reference to type mappings and instances. Unity returns the same instance as long as the some code is holding a strong reference to this instance. For this, you need: 1: <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="external" /> 5: </type> PerThreadLifetimeManager – Unity returns a unique instance of an object for each thread – so this effectively is a singleton behavior on a  per-thread basis. 1: <typeAlias alias="perThread" type="Microsoft.Practices.Unity.PerThreadLifetimeManager, Microsoft.Practices.Unity"/> 2: ... 3: <type type="IProduct" mapTo="Product2"> 4: <lifetime type="perThread" /> 5: </type> One thing to note about this is that if you use RegisterInstance method to register an existing object, this instance will be returned for every thread, making this a purely singleton behavior. Needless to say, this type of lifetime management is useful in multi-threaded applications (duh!!). I hope this blog provided some basics on lifetime management of objects resolved in Unity and in the next blog, I’ll talk about Injection. Please see the code used here.
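
    If the XML registrations above feel verbose, the same lifetime behavior can be configured in code. A minimal, self-contained C# sketch (the stub IProduct/Product2 types below stand in for the article's real ones) that reproduces the hash-code test from the button click:

        using System;
        using Microsoft.Practices.Unity;

        public interface IProduct { }
        public class Product2 : IProduct { }   // stand-in for the article's Product2

        class LifetimeDemo
        {
            static void Main()
            {
                var container = new UnityContainer();

                // Transient (the default): every Resolve returns a new Product2.
                container.RegisterType<IProduct, Product2>(new TransientLifetimeManager());
                Console.WriteLine(container.Resolve<IProduct>().GetHashCode());
                Console.WriteLine(container.Resolve<IProduct>().GetHashCode()); // different value

                // Container-controlled (singleton): the same instance is cached and reused.
                container.RegisterType<IProduct, Product2>(new ContainerControlledLifetimeManager());
                Console.WriteLine(container.Resolve<IProduct>().GetHashCode());
                Console.WriteLine(container.Resolve<IProduct>().GetHashCode()); // same value
            }
        }

    The config-file approach shown in the article and this in-code registration are interchangeable; the lifetime manager passed to RegisterType (or declared in the lifetime element) is what decides transient, singleton, per-thread, or externally controlled behavior.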

  • Use Advanced Font Ligatures in Office 2010

    - by Matthew Guay
    Fonts can help your documents stand out and be easier to read, and Office 2010 helps you take your fonts even further with support for OpenType ligatures, stylistic sets, and more.  Here’s a quick look at these new font features in Office 2010. Introduction Starting with Windows 7, Microsoft has made an effort to support more advanced font features across their products.  Windows 7 includes support for advanced OpenType font features and laid the groundwork for advanced font support in programs with the new DirectWrite subsystem.  It also includes the new font Gabriola, which includes an incredible number of beautiful stylistic sets and ligatures. Now, with the upcoming release of Office 2010, Microsoft is bringing advanced typographical features to the Office programs we love.  This includes support for OpenType ligatures, stylistic sets, number forms, contextual alternative characters, and more.  These new features are available in Word, Outlook, and Publisher 2010, and work the same on Windows XP, Vista and Windows 7. Please note that Windows does include several OpenType fonts that include these advanced features.  Calibri, Cambria, Constantia, and Corbel all include multiple number forms, while Consolas, Palatino Linotype, and Gabriola (Windows 7 only) include all the OpenType features.  And, of course, these new features will work great with any other OpenType fonts you have that contain advanced ligatures, stylistic sets, and number forms. Using advanced typography in Word To use the new font features, open a new document, select an OpenType font, and enter some text.  Here we have Word 2010 in Windows 7 with some random text in the Gabriola font.  Click the arrow on the bottom of the Font section of the ribbon to open the font properties. Alternately, select the text and click Font. Now, click on the Advanced tab to see the OpenType features. You can change the ligatures setting… Choose Proportional or Tabular number spacing… And even select Lining or Old-style number forms. Here’s a comparison of Lining and Old-style number forms in Word 2010 with the Calibri font. Finally, you can choose various Stylistic sets for your font.  The dialog always shows 20 styles, whether or not your font includes that many.  Most include only 1 or 2; Gabriola includes 6. Here’s lorem ipsum text, using the Gabriola font with Stylistic set 6. Impressive, huh?  The font ligatures change based on context, so they will automatically change as you are typing.  Watch the transition as we typed the word Microsoft in Word with Gabriola stylistic set 6. Here’s another example, showing the fi and tt ligatures in Calibri. These effects work great in Word 2010 in XP, too. And, since Outlook uses Word as it’s editing engine, you can use the same options in Outlook 2010.  Note that these font effects may not show up the same if the recipient’s email client doesn’t support advanced OpenType typography.  It will, of course, display perfectly if the recipient is using Outlook 2010. Using advanced typography in Publisher 2010 Publisher 2010 includes the same advanced font features.  This is especially nice for those using Publisher for professional layout and design.  Simply insert a text box, enter some text, select it, and click the arrow on the bottom of the font box as in Word to open the font properties. This font options dialog is actually more advanced than Word’s font options.  You can preview your font changes on sample text right in the properties box.  
You can also choose to add or remove a swash from your characters. Conclusion Advanced typographical effects are a welcome addition to Word and Publisher 2010, and they are very impressive when coupled with modern fonts such as Gabriola. From designing elegant headers to using old-style numbers, these features are very useful and fun. Do you have a favorite OpenType font that includes advanced typographical features? Let us know in the comments! More Reading Advances in typography in Windows 7 – Engineering 7 Blog New features in Microsoft Word 2010

  • Creating an ASP.NET report using Visual Studio 2010 - Part 2

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 3 Creating the Client Report Definition file (RDLC) Add a folder called “RDLC”. This will hold our RDLC report.   Right click on the RDLC folder, select “Add new item..” and add an “RDLC” name of “Products”. We will use the “Report Wizard” to walk us through the steps of creating the RDLC.   In the next dialog, give the dataset a name called “ProductDataSet”. Change the data source to “NorthwindReports.DAL” and select “ProductRepository(GetProductsProjected)”. The fields that are returned from the method are shown on the right. Click next.   Drag and drop the ProductName, CategoryName, UnitPrice and Discontinued into the Values container. Note that you can create much more complex grouping using this UI. Click Next.   Most of the selections on this screen are grayed out because we did not choose a grouping in the previous screen. Click next. Choose a style for your report. Click next. The report graphic design surface is now visible. Right click on the report and add a page header and page footer. With the report design surface active, drag and drop a TextBox from the tool box to the page header. Drag one more textbox to the page header. We will use the text boxes to add some header text as shown in the next figure. You can change the font size and other properties of the textboxes using the formatting tool bar (marked in red). You can also resize the columns by moving your cursor in between columns and dragging. Adding Expressions Add two more text boxes to the page footer. We will use these to add the time the report was generated and page numbers. Right click on the first textbox in the page footer and select “Expression”. Add the following expression for the print date (note the = sign at the left of the expression in the dialog below) "© Northwind Traders " & Format(Now(),"MM/dd/yyyy hh:mm tt") Right click on the second text box and add the following for the page count.   Globals.PageNumber & " of " & Globals.TotalPages Formatting the page footer is complete.   We are now going to format the “Unit Price” column so it displays the number in currency format.  Right click on the [UnitPrice] column (not header) and select “Text Box Properties..” Under “Number”, select “Currency”. Hit OK. Adding a chart With the design surface active, go to the toolbox and drag and drop a chart control. You will need to move the product list table down first to make space for the chart contorl. The document can also be resized by dragging on the corner or at the page header/footer separator. In the next dialog, pick the first chart type. This can be changed later if needed. Click OK. The chart gets added to the design surface.   Click on the blue bars in the chart (not legend). This will bring up drop locations for dropping the fields. Drag and drop the UnitPrice and CategoryName into the top (y axis) and bottom (x axis) as shown below. This will give us the total unit prices for a given category. That is the best I could come up with as far as what report to render, sorry :-) Delete the legend area to get more screen estate. Resize the chart to your liking. Change the header, x axis and y axis text by double clicking on those areas. We made it this far. Let’s impress the client by adding a gradient to the bar graph :-) Right click on the blue bar and select “Series properties”. 
Under “Fill”, add a color and secondary color and select the Gradient style. We are done designing our report. In the next section you will see how to add the report to the report viewer control, bind to the data and make it refresh when the filter criteria are changed.   Creating an ASP.NET report using Visual Studio 2010 - Part 3

  • Why won't USB 3.0 external hard drive run at USB 3.0 speeds?

    - by jgottula
    I recently purchased a PCI Express x1 USB 3.0 controller card (containing the NEC USB 3.0 controller) with the intent of using a USB 3.0 external hard drive with my Linux box. I installed the card in an empty PCIe slot on my motherboard, connected the card to a power cable, strung a USB 3.0 cable between one of the new ports and my external HDD, and connected the HDD to a wall socket for power. Booting the system, the drive works 100% as intended, with the one exception of throughput: rather than using SuperSpeed 4.8 Gbps connectivity, it seems to be falling back to High Speed 480 Mbps USB 2.0-style throughput. Disk Utility shows it as a 480 Mbps device, and running a couple Disk Utility and dd benchmarks confirms that the drive fails to exceed ~40 MB/s (the approximate limit of USB 2.0), despite it being an SSD capable of far more than that. When I connect my USB 3.0 HDD, dmesg shows this: [ 3923.280018] usb 3-2: new high speed USB device using ehci_hcd and address 6 where I would expect to find this: [ 3923.280018] usb 3-2: new SuperSpeed USB device using xhci_hcd and address 6 My system was running on kernel 2.6.35-25-generic at the time. Then, I stumbled upon this forum thread by an individual who found that a bug, which was present in kernels prior to 2.6.37-rc5, could be the culprit for this type of problem. Consequently, I installed the 2.6.37-generic mainline Ubuntu kernel to determine if the problem would go away. It didn't, so I tried 2.6.38-rc3-generic, and even the 2.6.38 nightly from 2010.02.01, to no avail. In short, I'm trying to determine why, with USB 3.0 support in the kernel, my USB 3.0 drive fails to run at full SuperSpeed throughput. See the comments under this question for additional details. Output that might be relevant to the problem (when booting from 2.6.38-rc3): Relevant lines from dmesg: [ 19.589491] xhci_hcd 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 19.589512] xhci_hcd 0000:03:00.0: setting latency timer to 64 [ 19.589516] xhci_hcd 0000:03:00.0: xHCI Host Controller [ 19.589623] xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 12 [ 19.650492] xhci_hcd 0000:03:00.0: irq 17, io mem 0xf8100000 [ 19.650556] xhci_hcd 0000:03:00.0: irq 47 for MSI/MSI-X [ 19.650560] xhci_hcd 0000:03:00.0: irq 48 for MSI/MSI-X [ 19.650563] xhci_hcd 0000:03:00.0: irq 49 for MSI/MSI-X [ 19.653946] xHCI xhci_add_endpoint called for root hub [ 19.653948] xHCI xhci_check_bandwidth called for root hub Relevant section of sudo lspci -v: 03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at f8100000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] MSI-X: Enable+ Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff Capabilities: [150] #18 Kernel driver in use: xhci_hcd Kernel modules: xhci-hcd Relevant section of sudo lsusb -v: Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.38-020638rc3-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:03:00.0 bNumConfigurations 
1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0003 Self Powered Remote Wakeup Enabled Full, non-verbose lsusb: Bus 012 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 011 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 010 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 009 Device 003: ID 04d9:0702 Holtek Semiconductor, Inc. Bus 009 Device 002: ID 046d:c068 Logitech, Inc. G500 Laser Mouse Bus 009 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 006: ID 174c:5106 ASMedia Technology Inc. Bus 003 Device 004: ID 0bda:0151 Realtek Semiconductor Corp. Mass Storage Device (Multicard Reader) Bus 003 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 006: ID 1687:0163 Kingmax Digital Inc. Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 046d:081b Logitech, Inc. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Full output: full dmesg full lspci full lsusb

  • jquery masked input with asp.net control problem

    - by Eyla
    Hi, I'm having a problem using the jQuery maskedinput plugin with an ASP.NET TextBox. I have a check box that, when checked, applies one mask to the textbox and, when unchecked, switches it to a different mask. The problem is that if the textbox loses focus before the mask is completely filled in, the textbox is emptied. How can I fix this? Here is my code: <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> <script src="js/jquery-1.4.1.js" type="text/javascript"></script> <script src="js/jquery.maskedinput-1.2.2.js" type="text/javascript"></script> <script type="text/javascript"> function mycheck() { if ($('#<%=chk.ClientID %>').is(':checked')) { $("#<%=txt.ClientID %>").unmask(); $("#<%=txt.ClientID %>").val(""); $("#<%=txt.ClientID %>").mask("999999999999"); } else { $("#<%=txt.ClientID %>").unmask(); $("#<%=txt.ClientID %>").val(""); $("#<%=txt.ClientID %>").mask("(999)999-9999"); } } </script> <style type="text/css"> #form1 { margin-top: 0px; } </style> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" /> <div> <p> <asp:CheckBox ID="chk" runat="server" CssClass="kk" onclick="mycheck()" /> </p> <p> <asp:TextBox ID="txt" runat="server" CssClass="tt" ></asp:TextBox> </p> </div> </form> </body> </html>

  • how to use kml file in my code..

    - by zjm1126
    i download a kml file : <?xml version="1.0" encoding="UTF-8"?> <kml xmlns="http://www.opengis.net/kml/2.2"> <Document> <Style id="transGreenPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ff00</color> </PolyStyle> </Style> <Style id="transYellowPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d00ffff</color> </PolyStyle> </Style> <Style id="transRedPoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7d0000ff</color> </PolyStyle> </Style> <Style id="transBluePoly"> <LineStyle> <width>1.5</width> </LineStyle> <PolyStyle> <color>7dff0000</color> </PolyStyle> </Style> <Folder> <name>Placemarks</name> <open>0</open> <Placemark> <name>Simple placemark</name> <description>Attached to the ground. Intelligently places itself at the height of the underlying terrain.</description> <Point> <coordinates>-122.0822035425683,37.42228990140251,0</coordinates> </Point> </Placemark> <Placemark> <name>Descriptive HTML</name> <description><![CDATA[Click on the blue link!<br/><br/> Placemark descriptions can be enriched by using many standard HTML tags.<br/> For example: <hr/> Styles:<br/> <i>Italics</i>, <b>Bold</b>, <u>Underlined</u>, <s>Strike Out</s>, subscript<sub>subscript</sub>, superscript<sup>superscript</sup>, <big>Big</big>, <small>Small</small>, <tt>Typewriter</tt>, <em>Emphasized</em>, <strong>Strong</strong>, <code>Code</code> <hr/> Fonts:<br/> <font color="red">red by name</font>, <font color="#408010">leaf green by hexadecimal RGB</font> <br/> <font size=1>size 1</font>, <font size=2>size 2</font>, <font size=3>size 3</font>, <font size=4>size 4</font>, <font size=5>size 5</font>, <font size=6>size 6</font>, <font size=7>size 7</font> <br/> <font face=times>Times</font>, <font face=verdana>Verdana</font>, <font face=arial>Arial</font><br/> <hr/> Links: <br/> <a href="http://earth.google.com/">Google Earth!</a> <br/> or: Check out our website at www.google.com <hr/> Alignment:<br/> <p align=left>left</p> <p align=center>center</p> <p align=right>right</p> <hr/> Ordered Lists:<br/> <ol><li>First</li><li>Second</li><li>Third</li></ol> <ol type="a"><li>First</li><li>Second</li><li>Third</li></ol> <ol type="A"><li>First</li><li>Second</li><li>Third</li></ol> <hr/> Unordered Lists:<br/> <ul><li>A</li><li>B</li><li>C</li></ul> <ul type="circle"><li>A</li><li>B</li><li>C</li></ul> <ul type="square"><li>A</li><li>B</li><li>C</li></ul> <hr/> Definitions:<br/> <dl> <dt>Google:</dt><dd>The best thing since sliced bread</dd> </dl> <hr/> Centered:<br/><center> Time present and time past<br/> Are both perhaps present in time future,<br/> And time future contained in time past.<br/> If all time is eternally present<br/> All time is unredeemable.<br/> </center> <hr/> Block Quote: <br/> <blockquote> We shall not cease from exploration<br/> And the end of all our exploring<br/> Will be to arrive where we started<br/> And know the place for the first time.<br/> <i>-- T.S. 
Eliot</i> </blockquote> <br/> <hr/> Headings:<br/> <h1>Header 1</h1> <h2>Header 2</h2> <h3>Header 3</h3> <h3>Header 4</h4> <h3>Header 5</h5> <hr/> Images:<br/> <i>Remote image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png"><br/> <i>Scaled image</i><br/> <img src="http://code.google.com/apis/kml/documentation/googleSample.png" width=100><br/> <hr/> Simple Tables:<br/> <table border="1" padding="1"> <tr><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr> <tr><td>a</td><td>b</td><td>c</td><td>d</td><td>e</td></tr> </table> <br/>]]></description> <Point> <coordinates>-122,37,0</coordinates> </Point> </Placemark> </Folder> <Folder> <name>Google Campus - Polygons</name> <open>0</open> <description>A collection showing how easy it is to create 3-dimensional buildings</description> <Placemark> <name>Building 40</name> <styleUrl>#transRedPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0848938459612,37.42257124044786,17 -122.0849580979198,37.42211922626856,17 -122.0847469573047,37.42207183952619,17 -122.0845725380962,37.42209006729676,17 -122.0845954886723,37.42215932700895,17 -122.0838521118269,37.42227278564371,17 -122.083792243335,37.42203539112084,17 -122.0835076656616,37.42209006957106,17 -122.0834709464152,37.42200987395161,17 -122.0831221085748,37.4221046494946,17 -122.0829247374572,37.42226503990386,17 -122.0829339169385,37.42231242843094,17 -122.0833837359737,37.42225046087618,17 -122.0833607854248,37.42234159228745,17 -122.0834204551642,37.42237075460644,17 -122.083659133885,37.42251292011001,17 -122.0839758438952,37.42265873093781,17 -122.0842374743331,37.42265143972521,17 -122.0845036949503,37.4226514386435,17 -122.0848020460801,37.42261133916315,17 -122.0847882750515,37.42256395055121,17 -122.0848938459612,37.42257124044786,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 41</name> <styleUrl>#transBluePoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857412771483,37.42227033155257,17 -122.0858169768481,37.42231408832346,17 -122.085852582875,37.42230337469744,17 -122.0858799945639,37.42225686138789,17 -122.0858860101409,37.4222311076138,17 -122.0858069157288,37.42220250173855,17 -122.0858379542653,37.42214027058678,17 -122.0856732640519,37.42208690214408,17 -122.0856022926407,37.42214885429042,17 -122.0855902778436,37.422128290487,17 -122.0855841672237,37.42208171967246,17 -122.0854852065741,37.42210455874995,17 -122.0855067264352,37.42214267949824,17 -122.0854430712915,37.42212783846172,17 -122.0850990714904,37.42251282407603,17 -122.0856769818632,37.42281815323651,17 -122.0860162273783,37.42244918858723,17 -122.0857260327004,37.42229239604253,17 -122.0857412771483,37.42227033155257,17 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 42</name> <styleUrl>#transGreenPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0857862287242,37.42136208886969,25 -122.0857312990603,37.42136935989481,25 -122.0857312992918,37.42140934910903,25 -122.0856077073679,37.42138390166565,25 -122.0855802426516,37.42137299550869,25 -122.0852186221971,37.42137299504316,25 -122.0852277765639,37.42161656508265,25 -122.0852598189347,37.42160565894403,25 -122.0852598185499,37.42168200156,25 
-122.0852369311478,37.42170017860346,25 -122.0852643957828,37.42176197982575,25 -122.0853239032746,37.42176198013907,25 -122.0853559454324,37.421852864452,25 -122.0854108752463,37.42188921823734,25 -122.0854795379357,37.42189285337048,25 -122.0855436229819,37.42188921797546,25 -122.0856260178042,37.42186013499926,25 -122.085937287963,37.42186013453605,25 -122.0859428718666,37.42160898590042,25 -122.0859655469861,37.42157992759144,25 -122.0858640462341,37.42147115002957,25 -122.0858548911215,37.42140571326184,25 -122.0858091162768,37.4214057134039,25 -122.0857862287242,37.42136208886969,25 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> <Placemark> <name>Building 43</name> <styleUrl>#transYellowPoly</styleUrl> <Polygon> <extrude>1</extrude> <altitudeMode>relativeToGround</altitudeMode> <outerBoundaryIs> <LinearRing> <coordinates> -122.0844371128284,37.42177253003091,19 -122.0845118855746,37.42191111542896,19 -122.0850470999805,37.42178755121535,19 -122.0850719913391,37.42143663023161,19 -122.084916406232,37.42137237822116,19 -122.0842193868167,37.42137237801626,19 -122.08421938659,37.42147617161496,19 -122.0838086419991,37.4214613409357,19 -122.0837899728564,37.42131306410796,19 -122.0832796534698,37.42129328840593,19 -122.0832609819207,37.42139213944298,19 -122.0829373621737,37.42137236399876,19 -122.0829062425667,37.42151569778871,19 -122.0828502269665,37.42176282576465,19 -122.0829435788635,37.42176776969635,19 -122.083217411188,37.42179248552686,19 -122.0835970430103,37.4217480074456,19 -122.0839455556771,37.42169364237603,19 -122.0840077894637,37.42176283815853,19 -122.084113587521,37.42174801104392,19 -122.0840762473784,37.42171341292375,19 -122.0841447047739,37.42167881534569,19 -122.084144704223,37.42181720660197,19 -122.0842503333074,37.4218170700446,19 -122.0844371128284,37.42177253003091,19 </coordinates> </LinearRing> </outerBoundaryIs> </Polygon> </Placemark> </Folder> <Folder> <name>LineString</name> <open>0</open> <Placemark> <LineString> <tessellate>1</tessellate> <coordinates> -112.0814237830345,36.10677870477137,0 -112.0870267752693,36.0905099328766,0 </coordinates> </LineString> </Placemark> </Folder> <Folder> <name>GroundOverlay</name> <open>0</open> <GroundOverlay> <name>Large-scale overlay on terrain</name> <description>Overlay shows Mount Etna erupting on July 13th, 2001.</description> <Icon> <href>http://code.google.com/apis/kml/documentation/etna.jpg</href> </Icon> <LatLonBox> <north>37.91904192681665</north> <south>37.46543388598137</south> <east>15.35832653742206</east> <west>14.60128369746704</west> </LatLonBox> </GroundOverlay> </Folder> <Folder> <name>ScreenOverlays</name> <open>0</open> <ScreenOverlay> <name>screenoverlay_dynamic_top</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_screenoverlay.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="1" y="0.2" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_dynamic_right</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/dynamic_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="1" xunits="fraction" yunits="fraction"/> 
</ScreenOverlay> <ScreenOverlay> <name>Simple crosshairs</name> <visibility>0</visibility> <description>This screen overlay uses fractional positioning to put the image in the exact center of the screen</description> <Icon> <href>http://code.google.com/apis/kml/documentation/crosshairs.png</href> </Icon> <overlayXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <screenXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <rotationXY x="0.5" y="0.5" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="pixels" yunits="pixels"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_right.jpg</href> </Icon> <overlayXY x="1" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_topleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/top_left.jpg</href> </Icon> <overlayXY x="0" y="1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="1" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomright</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_right.jpg</href> </Icon> <overlayXY x="1" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="1" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> <ScreenOverlay> <name>screenoverlay_absolute_bottomleft</name> <visibility>0</visibility> <Icon> <href>http://code.google.com/apis/kml/documentation/bottom_left.jpg</href> </Icon> <overlayXY x="0" y="-1" xunits="fraction" yunits="fraction"/> <screenXY x="0" y="0" xunits="fraction" yunits="fraction"/> <rotationXY x="0" y="0" xunits="fraction" yunits="fraction"/> <size x="0" y="0" xunits="fraction" yunits="fraction"/> </ScreenOverlay> </Folder> </Document> </kml> and my code is : function initialize() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map_canvas")); var center=new GLatLng(39.9493, 116.3975); map.setCenter(center, 13); var geoXml = new GGeoXml("SamplesInMaps.kml"); <!--Place KML on Map --> map.addOverlay(geoXml); } } but ,i don't successful ,, do you know how to do this.. thanks

  • Parsing a .NET DataSet returned from a .NET Web Service in Java

    - by Chris Dail
    I have to consume a .NET-hosted web service from a Java application. Interoperability between the two is usually very good. The problem I'm running into is that the .NET application developer chose to expose data using the .NET DataSet object. There are lots of articles written about why you should not do this and how it makes interoperability difficult: http://www.hanselman.com/blog/ReturningDataSetsFromWebServicesIsTheSpawnOfSatanAndRepresentsAllThatIsTrulyEvilInTheWorld.aspx http://www.lhotka.net/weblog/ThoughtsOnPassingDataSetObjectsViaWebServices.aspx http://aspnet.4guysfromrolla.com/articles/051805-1.aspx http://www.theserverside.net/tt/articles/showarticle.tss?id=Top5WSMistakes My problem is that despite this not being recommended practice, I am stuck with having to consume a web service returning a DataSet from Java. When you generate a proxy for something like this with anything other than .NET, you basically end up with an object that looks like this: @XmlElement(namespace = "http://www.w3.org/2001/XMLSchema", required = true) protected Schema schema; @XmlAnyElement(lax = true) protected Object any; The first field is the actual schema that should describe the DataSet. When I process this using JAX-WS and JAXB in Java, it brings all of XML Schema in as Java objects to represent it. Walking that JAXB object tree is possible but not pretty. The any field holds the raw XML for the DataSet, in the shape described by that schema. The structure of the DataSet is pretty consistent, but the data types do change. I need access to the type information, and the schema does vary from call to call. I've thought of a few options, but none seem like 'good' options. Trying to generate Java objects from the schema using JAXB at runtime seems to be a bad idea; it would be way too slow, since it would need to happen every time. Brute-force walking the schema tree using the JAXB objects that JAX-WS brought in is another. Maybe instead of using JAXB to parse the schema, it would be easier to deal with it as XML and use XPath to try to find the type information I need. Are there other options I have not considered? Is there a Java library that parses DataSet objects easily? What have other people done in similar situations?

  • Query an XmlDocument without getting a 'Namespace prefix is not defined' problem

    - by Dan Revell
    I've got an XML document that both defines and references some namespaces. I load it into an XmlDocument object and, to the best of my knowledge, I create an XmlNamespaceManager object to query XPath against. The problem is that I'm getting XPath exceptions saying the namespace "my" is not defined. How do I get the namespace manager to see that the namespaces I am referencing are already defined? Or rather, how do I get the namespace definitions from the document into the namespace manager? Furthermore, it strikes me as strange that you have to provide a namespace manager to the document when you create it from the document's NameTable in the first place. Even if you need to hard-code namespaces manually, why can't you add them directly to the document? Why do you always have to pass this namespace manager with every single query? Why can't XmlDocument just know? The Code: XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(programFiles + @"Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\HfscBookingWorkflow\template.xml"); XmlNamespaceManager ns = new XmlNamespaceManager(xmlDoc.NameTable); XmlNode referenceNode = xmlDoc.SelectSingleNode("/my:myFields/my:ReferenceNumber", ns); referenceNode.InnerXml = this.bookingData.ReferenceNumber; XmlNode titleNode = xmlDoc.SelectSingleNode("/my:myFields/my:Title", ns); titleNode.InnerXml = this.bookingData.FamilyName; ... The Xml: <?xml version="1.0" encoding="UTF-8" ?> <?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:Inspection:-myXSD-2010-01-15T18-21-55" solutionVersion="1.0.0.104" productVersion="12.0.0" PIVersion="1.0.0.0" ?> <?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?> <my:myFields xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xhtml="http://www.w3.org/1999/xhtml" xmlns:my="http://schemas.microsoft.com/office/infopath/2003/myXSD/2010-01-15T18:21:55" xmlns:xd="http://schemas.microsoft.com/office/infopath/2003"> <my:DateRequested xsi:nil="true" /> <my:DateVisited xsi:nil="true" /> <my:ReferenceNumber /> <my:FireCall>false</my:FireCall> ...
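
    For anyone hitting the same exception: an XmlNamespaceManager built from xmlDoc.NameTable starts out with no prefix mappings, so the "my" prefix has to be registered explicitly against the URI declared on my:myFields before the XPath queries can resolve it. A small sketch of that missing step, dropped in right after xmlDoc.Load (the URI is copied from the XML above):

        XmlNamespaceManager ns = new XmlNamespaceManager(xmlDoc.NameTable);
        // Register the prefix/URI pair so XPath can resolve "my:".
        ns.AddNamespace("my",
            "http://schemas.microsoft.com/office/infopath/2003/myXSD/2010-01-15T18:21:55");

        XmlNode referenceNode = xmlDoc.SelectSingleNode("/my:myFields/my:ReferenceNumber", ns);

    The prefix passed to AddNamespace only has to match the one used in the XPath expressions; what must match the document is the namespace URI itself.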

  • WCF web service: response is 200/ok, but response body is empty

    - by user1021224
    I am creating a WCF web api service. My problem is that some methods return a 200/OK response, but the headers and the body are empty. In setting up my web service, I created an ADO.NET Entity Data Model. I chose ADO.NET DbContext Generator when I added a code generation item. In the Model.tt document, I changed HashSet and ICollection to List. I built my website. It used to be that when I coded a method to return a List of an entity (like List<Customer> or List<Employee> in the Northwind database), it worked fine. Over time, I could not return a List of any of those, and could only grab one entity. Now, it's gotten to a point where I can return a List<string> or List<int>, but not a List or an instance of any entity. When I try to get a List<AnyEntity>, the response is 200/OK, but the response headers and body are empty. I have tried using the debugger and Firefox's Web Console. Using FF's WC, I could only get an "undefined" status code. I am not sure where to go from here. EDIT: In trying to grab all Areas from the database, I do this: [WebGet(UriTemplate = "areas")] public List<a1Areas> AllAreas() { return context.a1Areas.ToList(); } I would appreciate any more methods for debugging this. Thanks in advance. Found the answer, thanks to Merlyn! In my Global.asax file, I forgot to comment out two lines that took care of proxies and disposing of my context object. The code is below: void Application_BeginRequest(object sender, EventArgs e) { var context = new AssignmentEntities(); context.Configuration.ProxyCreationEnabled = false; HttpContext.Current.Items["_context"] = context; } void Application_EndRequest(object sender, EventArgs e) { var context = HttpContext.Current.Items["_context"] as AssignmentEntities; if (context != null) { context.Dispose(); } }

    Read the article

  • perl dancer: passing database info to template

    - by Bubnoff
    Following Dancer tutorial here: http://search.cpan.org/dist/Dancer/lib/Dancer/Tutorial.pod I'm using my own sqlite3 database with this schema CREATE TABLE if not exists location (location_code TEXT PRIMARY KEY, name TEXT, stations INTEGER); CREATE TABLE if not exists session (id INTEGER PRIMARY KEY, date TEXT, sessions INTEGER, location_code TEXT, FOREIGN KEY(location_code) REFERENCES location(location_code)); My dancer code ( helloWorld.pm ) for the database: package helloWorld; use Dancer; use DBI; use File::Spec; use File::Slurp; use Template; our $VERSION = '0.1'; set 'template' => 'template_toolkit'; set 'logger' => 'console'; my $base_dir = qq(/home/automation/scripts/Area51/perl/dancer); # database crap sub connect_db { my $db = qw(/home/automation/scripts/Area51/perl/dancer/sessions.sqlite); my $dbh = DBI->connect("dbi:SQLite:dbname=$db", "", "", { RaiseError => 1, AutoCommit => 1 }); return $dbh; } sub init_db { my $db = connect_db(); my $file = qq($base_dir/schema.sql); my $schema = read_file($file); $db->do($schema) or die $db->errstr; } get '/' => sub { my $branch_code = qq(BPT); my $dbh = connect_db(); my $sql = q(SELECT * FROM session); my $sth = $dbh->prepare($sql) or die $dbh->errstr; $sth->execute or die $dbh->errstr; my $key_field = q(id); template 'show_entries.tt', { 'branch' => $branch_code, 'data' => $sth->fetchall_hashref($key_field), }; }; init_db(); true; Tried the example template on the site, doesn't work. <% FOREACH id IN data.keys.nsort %> <li>Date is: <% data.$id.sessions %> </li> <% END %> Produces page but with no data. How do I troubleshoot this as no clues come up in the console/cli? Thanks Bubnoff

    Read the article

  • Extending Enums, Overkill?

    - by CkH
    I have an object that needs to be serialized to an EDI format. For this example we'll say it's a car. A car might not be the best example because options change over time, but for the real object the Enums will never change. I have many Enums like the following with custom attributes applied. public enum RoofStyle { [DisplayText("Glass Top")] [StringValue("GTR")] Glass, [DisplayText("Convertible Soft Top")] [StringValue("CST")] ConvertibleSoft, [DisplayText("Hard Top")] [StringValue("HT ")] HardTop, [DisplayText("Targa Top")] [StringValue("TT ")] Targa, } The attributes are accessed via extension methods: public static string GetStringValue(this Enum value) { // Get the type Type type = value.GetType(); // Get fieldinfo for this type FieldInfo fieldInfo = type.GetField(value.ToString()); // Get the stringvalue attributes StringValueAttribute[] attribs = fieldInfo.GetCustomAttributes( typeof(StringValueAttribute), false) as StringValueAttribute[]; // Return the first if there was a match. return attribs.Length > 0 ? attribs[0].StringValue : null; } public static string GetDisplayText(this Enum value) { // Get the type Type type = value.GetType(); // Get fieldinfo for this type FieldInfo fieldInfo = type.GetField(value.ToString()); // Get the DisplayText attributes DisplayTextAttribute[] attribs = fieldInfo.GetCustomAttributes( typeof(DisplayTextAttribute), false) as DisplayTextAttribute[]; // Return the first if there was a match. return attribs.Length > 0 ? attribs[0].DisplayText : value.ToString(); } There is a custom EDI serializer that serializes based on the StringValue attributes like so: StringBuilder sb = new StringBuilder(); sb.Append(car.RoofStyle.GetStringValue()); sb.Append(car.TireSize.GetStringValue()); sb.Append(car.Model.GetStringValue()); ... There is another method that can get the Enum value from a StringValue for deserialization: car.RoofStyle = Enums.GetCode<RoofStyle>(EDIString.Substring(4, 3)) Defined as: public static class Enums { public static T GetCode<T>(string value) { foreach (object o in System.Enum.GetValues(typeof(T))) { if (((Enum)o).GetStringValue() == value.ToUpper()) return (T)o; } throw new ArgumentException("No code exists for type " + typeof(T).ToString() + " corresponding to value of " + value); } } And finally, for the UI, GetDisplayText() is used to show the user-friendly text. What do you think? Overkill? Is there a better way? Or Goldilocks (just right)? Just want to get feedback before I integrate it into my personal framework permanently. Thanks.
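    Not necessarily overkill, but if GetStringValue() runs for every field of every EDI record, the repeated reflection can add up. One possible tweak (a sketch of my own, not from the original post, and assuming .NET 4's ConcurrentDictionary is available) is to cache the lookup per enum member:

    using System;
    using System.Collections.Concurrent;
    using System.Reflection;

    public static class EnumStringValueCache
    {
        // Boxed enum values compare by type and value, so each member gets exactly
        // one cache slot and reflection runs only the first time it is seen.
        private static readonly ConcurrentDictionary<Enum, string> cache =
            new ConcurrentDictionary<Enum, string>();

        public static string GetStringValueCached(this Enum value)
        {
            return cache.GetOrAdd(value, v =>
            {
                FieldInfo fieldInfo = v.GetType().GetField(v.ToString());
                var attribs = fieldInfo.GetCustomAttributes(
                    typeof(StringValueAttribute), false) as StringValueAttribute[];
                return attribs != null && attribs.Length > 0 ? attribs[0].StringValue : null;
            });
        }
    }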

    Read the article

  • jQuery Menu plugins under ASP.NET MVC seem to only work in Chrome, but not in IE & Firefox

    - by Antony
    Recently, I was trying to prototype some jQuery-based menus in ASP.NET MVC. Just to name two examples here: plugins.jquery.com/project/columnview www.filamentgroup.com/lab/jquery_ipod_style_and_flyout_menus/ Their demo pages look great, but when I integrate their sample code into MVC, the script no longer works in IE and Firefox, although it seems to work just fine under Google Chrome. Can someone be kind enough to point out what I missed? I will be honest here. I am still new to JavaScript, so it is still a learning phase for me, and any help is highly appreciated. I have placed a copy of my VS2010 solution zip file @ http://db.tt/0UNDkN Here is what I did. In the Site.Master, I have something like <body> <div class="page">{truncated...}</div> <script src="http://code.jquery.com/jquery-1.4.2.min.js" type="text/javascript" charset="utf-8"></script> <asp:ContentPlaceHolder ID="ScriptContent" runat="server" /> </body> And inside the View file, I have the following <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <div id="original"> {some demo block, copied from javascript demo} </div> </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="ScriptContent" runat="server"> <script type="text/javascript" src="<%= Url.Content("~/Scripts/jquery.columnview.js") %>" /> <script type="text/javascript"> $(document).ready(function () { $('#original').columnview(); }); </script> </asp:Content> I compiled the code and ran it under IE. Ideally, it should work like the demo in www.christianyates.com/blog/jquery/finder-column-view-hierarchical-lists-jquery, but in reality, it only displays an unordered list in plain view. (If you download the solution file and run it, you should be able to repro this as well). Next, I tried Firefox: not working either, same result as IE. Finally, when I try it under Google Chrome 4.1 (latest version), the script displays just fine. Really puzzling here :-/ Thank you for reading :D

    Read the article

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
    A developer is asking how to select one radio button at a time if the radio button is inside a GridView. As you may know, setting the group name attribute of a radio button will not work if the radio button is located within a data representation control like the GridView. This is because the radio button inside the GridView behaves differently. Since a GridView is rendered as a table element, at run time it will assign a different "name" to each radio button. Hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a GridView and add a simple validation to it. To get started, let's go ahead and fire up Visual Studio and create a new web application / website project. Add a WebForm and then add a GridView. The markup would look something like this: <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false" > <Columns> <asp:TemplateField> <ItemTemplate> <asp:RadioButton ID="rb" runat="server" /> </ItemTemplate> </asp:TemplateField> <asp:BoundField DataField="RowNumber" HeaderText="Row Number" /> <asp:BoundField DataField="Col1" HeaderText="First Column" /> <asp:BoundField DataField="Col2" HeaderText="Second Column" /> </Columns> </asp:GridView> Notice that I've added a TemplateField column so that we can add the radio button there. Also I have set up some BoundField columns and set the DataFields as RowNumber, Col1 and Col2. These columns are just dummy columns and I used them for the simplicity of this example. Now, where did these columns come from? These columns are created by hand in the code-behind file of the ASPX. Here's the code below: private DataTable FillData() { DataTable dt = new DataTable(); DataRow dr = null; //Create DataTable columns dt.Columns.Add(new DataColumn("RowNumber", typeof(string))); dt.Columns.Add(new DataColumn("Col1", typeof(string))); dt.Columns.Add(new DataColumn("Col2", typeof(string))); //Create Row for each columns dr = dt.NewRow(); dr["RowNumber"] = 1; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 2; dr["Col1"] = "AA"; dr["Col2"] = "BB"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 3; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 4; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); dr = dt.NewRow(); dr["RowNumber"] = 5; dr["Col1"] = "A"; dr["Col2"] = "B"; dt.Rows.Add(dr); return dt; } And here's the code for binding the GridView with the dummy data above: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { GridView1.DataSource = FillData(); GridView1.DataBind(); } } Okay, we now have GridView data with a radio button on each row. Now let's go ahead and switch back to the ASPX markup. In this example I'm going to use JavaScript to ensure that only one radio button can be selected at a time.
Here's the JavaScript code below: function CheckOtherIsCheckedByGVID(rb) { var isChecked = rb.checked; var row = rb.parentNode.parentNode; if (isChecked) { row.style.backgroundColor = '#B6C4DE'; row.style.color = 'black'; } var currentRdbID = rb.id; parent = document.getElementById("<%= GridView1.ClientID %>"); var items = parent.getElementsByTagName('input'); for (i = 0; i < items.length; i++) { if (items[i].id != currentRdbID && items[i].type == "radio") { if (items[i].checked) { items[i].checked = false; items[i].parentNode.parentNode.style.backgroundColor = 'white'; items[i].parentNode.parentNode.style.color = '#696969'; } } } } The function above sets the style of the currently selected radio button's row to show that the row is selected, then loops through the radio buttons in the GridView, de-selects the previously selected radio button and sets its row style back to the default. You can then call the JavaScript function above in the onclick event of the radio button like below: <asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" /> Here's the output below (screenshots omitted): On Load, and After Selecting a Radio Button. As you have noticed, on initial load there's no radio button selected by default in the GridView. Now let's add a simple validation for that. We will basically display an error message if a user clicks a button that triggers a postback without selecting a radio button in the GridView. Here's the JavaScript for the validation: function ValidateRadioButton(sender, args) { var gv = document.getElementById("<%= GridView1.ClientID %>"); var items = gv.getElementsByTagName('input'); for (var i = 0; i < items.length ; i++) { if (items[i].type == "radio") { if (items[i].checked) { args.IsValid = true; return; } else { args.IsValid = false; } } } } The function above loops through the rows in the GridView and finds all the radio buttons within it. It then checks each radio button's checked property. If a radio button is checked it sets IsValid to true, otherwise it sets it to false. The reason I'm using IsValid is because I'm using an ASP.NET validator control for the validation. Now add the following markup below the GridView declaration: <br /> <asp:Label ID="lblMessage" runat="server" /> <br /> <asp:Button ID="btn" runat="server" Text="POST" onclick="btn_Click" ValidationGroup="GroupA" /> <asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="Please select row in the grid." ClientValidationFunction="ValidateRadioButton" ValidationGroup="GroupA" style="display:none"></asp:CustomValidator> <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="GroupA" HeaderText="Error List:" DisplayMode="BulletList" ForeColor="Red" /> And then in the Button Click event add this simple code below, just to test that the validation works: protected void btn_Click(object sender, EventArgs e) { lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt"); } Here's the output that you can see in the browser (screenshot omitted). That's it! I hope someone finds this post useful! Technorati Tags: ASP.NET,JavaScript,GridView
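As a possible extra safeguard (my own addition, not part of the original post), the same CustomValidator can be given a server-side handler so the rule still holds when JavaScript is disabled; wire it up with OnServerValidate="CustomValidator1_ServerValidate" on the validator and add something like this to the code-behind, reusing the rb ID from the ItemTemplate above:

protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
{
    // Valid only if at least one row in the grid has its radio button checked.
    args.IsValid = false;
    foreach (GridViewRow row in GridView1.Rows)
    {
        var rb = row.FindControl("rb") as RadioButton;
        if (rb != null && rb.Checked)
        {
            args.IsValid = true;
            break;
        }
    }
}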

    Read the article

  • Generate Strongly Typed Observable Events for the Reactive Extensions for .NET (Rx)

    - by Bobby Diaz
    I must have tried reading through the various explanations and introductions to the new Reactive Extensions for .NET before the concepts finally started sinking in.  The article that gave me the ah-ha moment was over on SilverlightShow.net and titled Using Reactive Extensions in Silverlight.  The author did a good job comparing the "normal" way of handling events vs. the new "reactive" methods. Admittedly, I still have more to learn about the Rx Framework, but I wanted to put together a sample project so I could start playing with the new Observable and IObservable<T> constructs.  I decided to throw together a whiteboard application in Silverlight based on the Drawing with Rx example on the aforementioned article.  At the very least, I figured I would learn a thing or two about a new technology, but my real goal is to create a fun application that I can share with the kids since they love drawing and coloring so much! Here is the code sample that I borrowed from the article: var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove"); var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown"); var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp");       var draggingEvents = from pos in mouseMoveEvent                              .SkipUntil(mouseLeftButtonDown)                              .TakeUntil(mouseLeftButtonUp)                              .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>                                  new                                  {                                      X2 = cur.EventArgs.GetPosition(this).X,                                      X1 = prev.EventArgs.GetPosition(this).X,                                      Y2 = cur.EventArgs.GetPosition(this).Y,                                      Y1 = prev.EventArgs.GetPosition(this).Y                                  })).Repeat()                          select pos;       draggingEvents.Subscribe(p =>     {         Line line = new Line();         line.Stroke = new SolidColorBrush(Colors.Black);         line.StrokeEndLineCap = PenLineCap.Round;         line.StrokeLineJoin = PenLineJoin.Round;         line.StrokeThickness = 5;         line.X1 = p.X1;         line.Y1 = p.Y1;         line.X2 = p.X2;         line.Y2 = p.Y2;         this.LayoutRoot.Children.Add(line);     }); One thing that was nagging at the back of my mind was having to deal with the event names as strings, as well as the verbose syntax for the Observable.FromEvent<TEventArgs>() method.  I came up with a couple of static/helper classes to resolve both issues and also created a T4 template to auto-generate these helpers for any .NET type.  
Take the following code from the above example: var mouseMoveEvent = Observable.FromEvent<MouseEventArgs>(this, "MouseMove"); var mouseLeftButtonDown = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonDown"); var mouseLeftButtonUp = Observable.FromEvent<MouseButtonEventArgs>(this, "MouseLeftButtonUp"); Turns into this with the new static Events class: var mouseMoveEvent = Events.Mouse.Move.On(this); var mouseLeftButtonDown = Events.Mouse.LeftButtonDown.On(this); var mouseLeftButtonUp = Events.Mouse.LeftButtonUp.On(this); Or better yet, just remove the variable declarations altogether:     var draggingEvents = from pos in Events.Mouse.Move.On(this)                              .SkipUntil(Events.Mouse.LeftButtonDown.On(this))                              .TakeUntil(Events.Mouse.LeftButtonUp.On(this))                              .Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>                                  new                                  {                                      X2 = cur.EventArgs.GetPosition(this).X,                                      X1 = prev.EventArgs.GetPosition(this).X,                                      Y2 = cur.EventArgs.GetPosition(this).Y,                                      Y1 = prev.EventArgs.GetPosition(this).Y                                  })).Repeat()                          select pos; The Move, LeftButtonDown and LeftButtonUp members of the Events.Mouse class are readonly instances of the ObservableEvent<TTarget, TEventArgs> class that provide type-safe access to the events via the On() method.  Here is the code for the class: using System; using System.Collections.Generic; using System.Linq;   namespace System.Linq {     /// <summary>     /// Represents an event that can be managed via the <see cref="Observable"/> API.     /// </summary>     /// <typeparam name="TTarget">The type of the target.</typeparam>     /// <typeparam name="TEventArgs">The type of the event args.</typeparam>     public class ObservableEvent<TTarget, TEventArgs> where TEventArgs : EventArgs     {         /// <summary>         /// Initializes a new instance of the <see cref="ObservableEvent"/> class.         /// </summary>         /// <param name="eventName">Name of the event.</param>         protected ObservableEvent(String eventName)         {             EventName = eventName;         }           /// <summary>         /// Registers the specified event name.         /// </summary>         /// <param name="eventName">Name of the event.</param>         /// <returns></returns>         public static ObservableEvent<TTarget, TEventArgs> Register(String eventName)         {             return new ObservableEvent<TTarget, TEventArgs>(eventName);         }           /// <summary>         /// Creates an enumerable sequence of event values for the specified target.         /// </summary>         /// <param name="target">The target.</param>         /// <returns></returns>         public IObservable<IEvent<TEventArgs>> On(TTarget target)         {             return Observable.FromEvent<TEventArgs>(target, EventName);         }           /// <summary>         /// Gets or sets the name of the event.         /// </summary>         /// <value>The name of the event.</value>         public string EventName { get; private set; }     } } And this is how it's used:     /// <summary>     /// Categorizes <see cref="ObservableEvents"/> by class and/or functionality.     
/// </summary>     public static partial class Events     {         /// <summary>         /// Implements a set of predefined <see cref="ObservableEvent"/>s         /// for the <see cref="System.Windows.UIElement"/> class         /// that represent mouse related events.         /// </summary>         public static partial class Mouse         {             /// <summary>Represents the MouseMove event.</summary>             public static readonly ObservableEvent<UIElement, MouseEventArgs> Move =                 ObservableEvent<UIElement, MouseEventArgs>.Register("MouseMove");               // additional members omitted...         }     } The source code contains a static Events class with predefined members for various categories (Key, Mouse, etc.).  There is also an Events.tt template that you can customize to generate additional event categories for any .NET type.  All you should have to do is add the name of your class to the types collection near the top of the template:     types = new Dictionary<String, Type>()     {         //{ "Microsoft.Maps.MapControl.Map, Microsoft.Maps.MapControl", null }         { "System.Windows.FrameworkElement, System.Windows", null },         { "Whiteboard.MainPage, Whiteboard", null }     }; The template is also a bit rough at this point, but at least it generates code that *should* compile.  Please let me know if you run into any issues with it.  Some people have reported errors when trying to use T4 templates within a Silverlight project, but I was able to get it to work with a little black magic...  You can download the source code for this project or play around with the live demo.  Just be warned that it is at a very early stage so don't expect to find much today.  I plan on adding a lot more options like pen colors and sizes, saving, printing, etc. as time permits.  HINT: hold down the ESC key to erase! Enjoy! Additional Resources Using Reactive Extensions in Silverlight DevLabs: Reactive Extensions for .NET (Rx) Rx Framework Part III - LINQ to Events - Generating GetEventName() Wrapper Methods using T4
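To give a feel for how the generated helpers read at the call site, here is a rough, hand-written equivalent for the keyboard plus a subscription; this is only an illustration following the ObservableEvent pattern above (the real Events members come out of the Events.tt template), with KeyDown and KeyEventArgs being the standard Silverlight members:

public static partial class Events
{
    /// <summary>Keyboard related events for any UIElement.</summary>
    public static partial class Key
    {
        /// <summary>Represents the KeyDown event.</summary>
        public static readonly ObservableEvent<UIElement, KeyEventArgs> Down =
            ObservableEvent<UIElement, KeyEventArgs>.Register("KeyDown");
    }
}

// Consuming it in the whiteboard page, e.g. clearing the canvas when ESC goes down:
Events.Key.Down.On(this)
    .Where(e => e.EventArgs.Key == System.Windows.Input.Key.Escape)
    .Subscribe(e => this.LayoutRoot.Children.Clear());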

    Read the article

  • WhatsApp & Tasker for Android – Read & Write messages

    - by Shaurya Anand
    So, I finally gave up on all my previous Microsoft Mobile/Phone OS devices and made my switch to Android this year. I am using my Samsung Galaxy Note GT-N7000 with CyanogenMod 9.1.0 (http://get.cm/get/jenkins/7086/cm-9.1.0-n7000.zip) and ClockworkMod 6.0.1.2 (http://download2.clockworkmod.com/recoveries/recovery-clockwork-6.0.1.2-n7000.zip) since August this year and I am so happy with the performance and the flexibility it offers me. As a software developer by profession, I would expect most of my gadgets to be highly customizable and programmable (one time or at intervals) to suit my needs as closely as they can. I was introduced to Automation for Android – Tasker (https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm&hl=en) via reddit (http://www.reddit.com/r/tasker) and the word ‘automation’ was enough for me to dive right into this app. The only automation I did earlier was switching profiles depending on location, on those phones. And now, just imagine a complete set of possibilities that can be automated on the phone or via the phone. I did my research and found a couple of other tools that do the same/as close as what Tasker can do, and a few of them are even free. There’s one even by Microsoft called on{X} (https://play.google.com/store/apps/details?id=com.microsoft.onx.app&hl=en). Microsoft’s on{X} really caught my eye. You can write code for your phone on their web application, deploy it on your phone and even trace the flow, all using your PC. Really brilliant, and I love the fact that it’s all JavaScript. Here comes the but: it is still very, very young, and its policy of accessing my News Feed on Facebook is not something that I can digest. On{X} is good, but as I said earlier, the API is not very mature and hence I gave up on it. I bought Tasker, the best 5,00 € I spent in ages, and I want to talk about it in this post. I am still a “noob” at operating this tool, but I took a shot at automating WhatsApp (https://play.google.com/store/apps/details?id=com.whatsapp&hl=en), a popular messenger for various platforms. The requirement for the automation is that, if I send a WhatsApp ‘wru’ message to the phone, it should respond back giving the location and battery level of my phone. It could be useful if you'd like to locate your misplaced phone or automatically reply to your partner/friend; honestly, I don’t know what you will use it for - through this post, I am just introducing automating WhatsApp using Tasker. Before we begin: the following script only works when your phone is rooted, as we will be accessing the WhatsApp database and typing some special characters like ‘:’. Let’s follow the code line by line: Profile:         Location request from XYZ. (12) // Name of your profile. Event:         Notification [ Owner Application:WhatsApp Title:* ] // When a new notification comes from WhatsApp, this event is fired. Read the end note, if you face problems with Chrome app after enabling Tasker accessibility. Enter:         A1: Run Shell [ Command:sqlite3 // We will access the WhatsApp database and check if the message comes from the designated phone number or not. We mustn’t reply to every message.                 /data/data/com.whatsapp/databases/msgstore.db "SELECT _id, data FROM                  messages WHERE key_from_me='0' AND key_remote_jid LIKE '%XXXXXXXXXXX%' // Replace XXXXXXXXXXX with the phone number of your message sender.
ORDER BY _id DESC LIMIT 1;" Timeout (Seconds):10 Use Root:On Store // I made a timeout for 10 seconds, if in case WhatsApp is busy accessing the database.                 Result In:%WHATSAPP_CURRREQ ] // Store the read Id and the last message on to the variable %WHATSAPP_CURRREQ         A2: If [ %WHATSAPP_CURRREQ ~R .*[wW][rR][uU].* ] // Check if the pattern of the message is correct and we are all set to send the location.                 A3: If [ %WHATSAPP_CURRREQ !~ %WHATSAPP_LASTREQ ] // Verify that the message is different from the last request. Remember every message has a unique Id.                         A4: Notify [ Title:WhatsApp location request... Text:Sending location // Just a notification that the location message is being prepared.                                 to Krati Gupta... Icon:<icon> Number:0 Permanent:On Priority:3 ] // Make a note it is a permanent notification, we will clear it later.                         A5: Secure Settings [ Configuration:Pattern Lock Disabled // I am disabling the pattern lock, that I use using the plugin Secure Settings.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure // You can download the plugin from here: https://play.google.com/store/apps/details?id=com.intangibleobject.securesettings.plugin&hl=en                                 Settings ]                         A6: Secure Settings [ Configuration:Keyguard Disabled // Disable the keygaurd, it is useful, when your phone is on lock and you want to automate everything, even the typing.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A7: Secure Settings [ Configuration:GPS Enabled // Pretty clear, turn on the GPS and get location at A8                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A8: AutoShortcut [ Configuration:WhatsApp: Some One // I am using AutoShortcut plugin (https://play.google.com/store/apps/details?id=com.joaomgcd.autoshortcut) to start WhatsApp with the indented recipient.                                 Package:com.joaomgcd.autoshortcut Name:AutoShortcut ] // Replace Some One, actually choose it from the plugin, the right recipient.                         A9: Get Location [ Source:Any Timeout (Seconds):30 Continue Task // I am getting the location, timeout is 30 seconds, adjust it accordingly.                                 Immediately:Off Keep Tracking:Off ]                         A10: Secure Settings [ Configuration:Screen Dim // Now, this extension of the plugin Secure Settings, wakes your device so that you can type out the string on the WhatsApp app.                                 5 Seconds Package:com.intangibleobject.securesettings.plugin                                 Name:Secure Settings ]                         A11: Run Shell [ Command:input text // Now, I am using the shell script to type the text to the window, because the ‘:’ while not be typed from the Type task in Tasker.                                 LOCATION:maps.google.com/maps?q=%LOC Timeout (Seconds):0 Use Root:On // And also, this is way faster, but remember you need root for this, not for the other way of typing.                                 
Store Result In: ]                         A12: Dpad [ Button:Right Repeat Times:1 ] // Focus the Send button                         A13: Dpad [ Button:Press Repeat Times:1 ] // And press it.                         A14: Dpad [ Button:Left Repeat Times:1 ] // Get back to the typing box.                         A15: Run Shell [ Command:input text LOCATION_ACCURACY:%LOCACC Timeout                                 (Seconds):0 Use Root:On Store Result In: ]                         A16: Dpad [ Button:Right Repeat Times:1 ]                         A17: Dpad [ Button:Press Repeat Times:1 ]                         A18: Dpad [ Button:Left Repeat Times:1 ]                         A19: Run Shell [ Command:input text BATTERY_LEVEL:%BATT% Timeout // I am adding Battery level in my case as well.                                 (Seconds):0 Use Root:On Store Result In: ]                         A20: Dpad [ Button:Right Repeat Times:1 ]                         A21: Dpad [ Button:Press Repeat Times:1 ]                         A22: Variable Set [ Name:%WHATSAPP_LASTREQ To:%WHATSAPP_CURRREQ Do // And now, we say, request is done.                                 Maths:Off Append:Off ]                         A23: Button [ Button:Back ] // I am exiting the WhatsApp nicely and not killing it. If you are the murderer kind, kill it, just know, you don’t have any place in the heaven.                         A24: Button [ Button:Back ]                         A25: Notify Cancel [ Title: Warn Not Exist:Off ] // Remove the permanent notification.                         A26: Notify [ Title:WhatsApp location request Text:Location sent // Make a temporary notification, and say, location is sent.                                 successfully. Icon:<icon> Number:0 Permanent:Off Priority:3 ]                                                         A27: Secure Settings [ Configuration:GPS Disabled // Disable all the horrible things we turned on earlier.                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A28: Secure Settings [ Configuration:Pattern Lock Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                         A29: Secure Settings [ Configuration:Keyguard Enabled                                 Package:com.intangibleobject.securesettings.plugin Name:Secure                                 Settings ]                 A30: End If         A31: End If Download this Task from here: http://db.tt/9vRmbhyb That’s it in the above small example – you can read/write messages from/to WhatsApp app. I am using n7000-cm9.1-cwr6. Oh yea, and if you are having the Talkback auto enabled for Chrome browser, you need to turn Off the Web scripts to run. Tasker is amazing, I have automated a lot of tasks using this tool. I will share a few none generic ones with you in my coming post here.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request. Pattern applicability This is a good fit for scenarios where: the runtime environment is secure enough to keep shared secrets the consumer can execute custom code, including building HTTP requests with custom headers the consumer cannot use the Azure SDK assemblies the service may need to know who is consuming it the service does not need to know who the end-user is Note there isn't actually a .NET requirement here. By exposing the service in a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flown through yet. In this post, we’ll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3. Authenticating and authorizing with ACS We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see walkthrough in Part 1). I've named the identity partialTrustConsumer. We’ll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key – for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key: We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: partialTrustConsumer Output claim type: net.windows.servicebus.action Output claim value: Send As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace. RESTfully exposing the on-premise service through Azure Service Bus Relay The Part 3 sample code is ready to go: just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files.  But to do it yourself is very simple. We already have a WebGet attribute in the service for locally making REST calls, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
It's as easy as adding this endpoint to Web.config for the service:         <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"                   binding="webHttpRelayBinding"                    contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> - and adding the webHttp attribute in your endpoint behavior:           <behavior name="SharedSecret">             <webHttp/>             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> Where's my WSDL? The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help - you'll see the uri format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123 Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven’t provided an authorization header: <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error> By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service, Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT: var values = new System.Collections.Specialized.NameValueCollection(); values.Add("wrap_name", "partialTrustConsumer"); //service identity name values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); //service identity password values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); //this is the realm of the RP in ACS var acsClient = new WebClient(); var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values); rawToken = System.Text.Encoding.UTF8.GetString(responseBytes); With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus. 
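That manipulation isn't shown here, but it amounts to pulling the wrap_access_token field out of ACS's form-encoded response, URL-decoding it, and sending it in a WRAP authorization header on each call. A hedged sketch (the exact code is in the Part 3 sample on GitHub; this follows the standard WRAP v0.9 format and assumes the usual System.Linq and System.Net usings):

// rawToken comes back form-encoded: wrap_access_token=...&wrap_access_token_expires_in=...
var token = Uri.UnescapeDataString(
    rawToken.Split('&')
            .First(v => v.StartsWith("wrap_access_token="))
            .Substring("wrap_access_token=".Length));

// Attach the SWT to the relayed REST call using the WRAP scheme.
var serviceClient = new WebClient();
serviceClient.Headers[HttpRequestHeader.Authorization] =
    string.Format("WRAP access_token=\"{0}\"", token);
var response = serviceClient.DownloadString(
    "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");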
Running the sample Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure: Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • Apache 2 Virtual Hosts not working on OSX 10.6

    - by matt_lethargic
    This is my first MacBook and I'm trying to get virtual hosts up and running so as it's going to be my dev machine. I've got apache/php/mysql running fine, the problem is that what ever address I go to I just get one of the virtual hosts I've setup. I can't even get to the root site anymore. I had phpmyadmin setup on http://localhost/pma but now that comes up with an error. If I take out the vhosts config file it seems to work again. I've put all my configs I can think you'll need below. ############## httpd config ############# ServerRoot "/usr" Listen 80 LoadModule authn_file_module libexec/apache2/mod_authn_file.so LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so LoadModule authn_default_module libexec/apache2/mod_authn_default.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so LoadModule authz_default_module libexec/apache2/mod_authz_default.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so LoadModule cache_module libexec/apache2/mod_cache.so LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so LoadModule dbd_module libexec/apache2/mod_dbd.so LoadModule dumpio_module libexec/apache2/mod_dumpio.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so LoadModule substitute_module libexec/apache2/mod_substitute.so LoadModule deflate_module libexec/apache2/mod_deflate.so LoadModule log_config_module libexec/apache2/mod_log_config.so LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so LoadModule ident_module libexec/apache2/mod_ident.so LoadModule usertrack_module libexec/apache2/mod_usertrack.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so LoadModule ssl_module libexec/apache2/mod_ssl.so LoadModule mime_module libexec/apache2/mod_mime.so LoadModule dav_module libexec/apache2/mod_dav.so LoadModule status_module libexec/apache2/mod_status.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so LoadModule asis_module libexec/apache2/mod_asis.so LoadModule info_module libexec/apache2/mod_info.so 
LoadModule cgi_module libexec/apache2/mod_cgi.so LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so LoadModule imagemap_module libexec/apache2/mod_imagemap.so LoadModule actions_module libexec/apache2/mod_actions.so LoadModule speling_module libexec/apache2/mod_speling.so LoadModule userdir_module libexec/apache2/mod_userdir.so LoadModule alias_module libexec/apache2/mod_alias.so LoadModule rewrite_module libexec/apache2/mod_rewrite.so LoadModule bonjour_module libexec/apache2/mod_bonjour.so LoadModule php5_module libexec/apache2/libphp5.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User _www Group _www </IfModule> </IfModule> ServerAdmin [email protected] ServerName localhost:80 DocumentRoot "/Library/WebServer/Documents" <Directory /> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <Directory "/Library/WebServer/Documents"> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.html </IfModule> <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Order allow,deny Deny from all Satisfy All </FilesMatch> <Files "rsrc"> Order allow,deny Deny from all Satisfy All </Files> <DirectoryMatch ".*\.\.namedfork"> Order allow,deny Deny from all Satisfy All </DirectoryMatch> ErrorLog "/private/var/log/apache2/error_log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "/private/var/log/apache2/access_log" common </IfModule> <IfModule alias_module> ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" </IfModule> <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig /private/etc/apache2/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> TraceEnable off Include /private/etc/apache2/extra/httpd-mpm.conf Include /private/etc/apache2/extra/httpd-autoindex.conf Include /private/etc/apache2/extra/httpd-languages.conf Include /private/etc/apache2/extra/httpd-userdir.conf Include /private/etc/apache2/extra/httpd-vhosts.conf Include /private/etc/apache2/extra/httpd-manual.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf ############# httpd-vhosts ################ NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/Users/matt/Workspace/farmers-arms/website/farmers_arms" ServerName dev.farmers ServerAlias www.dev.farmers ErrorLog "/private/var/log/apache2/localhost.farmers-error_log" CustomLog "/private/var/log/apache2/localhost.farmers-access_log" common <Directory "/Users/matt/Workspace/farmers-arms/website/farmers_arms"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> Hosts file 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 127.0.0.1 dev.farmers 127.0.0.1 dev.hft Help!!!

    Read the article

  • My Linux server takes more than an hour to boot. Suggestions?

    - by jamieb
    I am building a CentOS 5.4 system that boots off a compact flash card using a card reader that emulates an IDE drive. It literally takes about an hour to boot. The ultra-slow part occurs when Grub is loading the kernel. Once that's done, the rest of the boot process only takes about a minute to get to a login prompt. Does anyone have any suggestions? I suspect that it may have to do with UDMA. Everything IDE-related in my BIOS seems to checkout. The read performance hdparm is telling me 1.77 MB/s. Ouch! (But even at that rate, it still shouldn't take an hour to decompress and load the kernel) [root@server ~]# hdparm -tT /dev/hdc /dev/hdc: Timing cached reads: 2444 MB in 2.00 seconds = 1222.04 MB/sec Timing buffered disk reads: 6 MB in 3.39 seconds = 1.77 MB/sec Trying to enable DMA is a no-go though: [root@server ~]# hdparm -d1 /dev/hdc /dev/hdc: setting using_dma to 1 (on) HDIO_SET_DMA failed: Operation not permitted using_dma = 0 (off) Here's some command outputs that might help: System [root@server ~]# uname -a Linux server.localdomain 2.6.18-164.el5xen #1 SMP Thu Sep 3 04:47:32 EDT 2009 i686 i686 i386 GNU/Linux PCI info: [root@server ~]# lspci -v 00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02) Subsystem: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller]) Subsystem: Intel Corporation 82945G/GZ Integrated Graphics Controller Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at fdf00000 (32-bit, non-prefetchable) [size=512K] I/O ports at ff00 [size=8] Memory at d0000000 (32-bit, prefetchable) [size=256M] Memory at fdf80000 (32-bit, non-prefetchable) [size=256K] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 2 00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 Flags: bus master, medium devsel, latency 0, IRQ 16 I/O ports at fe00 [size=32] 00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 Flags: bus master, medium devsel, latency 0, IRQ 17 I/O ports at fd00 [size=32] 00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 Flags: bus master, medium devsel, latency 0, IRQ 18 I/O ports at fc00 [size=32] 00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at fb00 [size=32] 00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI]) Subsystem: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at fdfff000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode]) Flags: bus master, fast devsel, 
latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fde00000-fdefffff Prefetchable memory behind bridge: 00000000fdd00000-00000000fdd00000 Capabilities: [50] #0d [0000] 00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01) Subsystem: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information 00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01) (prog-if 80 [Master]) Subsystem: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 17 I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at <unassigned> I/O ports at f800 [size=16] Capabilities: [70] Power Management version 2 00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01) Subsystem: Intel Corporation 82801G (ICH7 Family) SMBus Controller Flags: medium devsel, IRQ 17 I/O ports at 0500 [size=32] 01:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 18 I/O ports at de00 [size=256] Memory at fdeff000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:06.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 17 I/O ports at dc00 [size=256] Memory at fdefe000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 01:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10) Subsystem: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ Flags: bus master, medium devsel, latency 32, IRQ 19 I/O ports at da00 [size=256] Memory at fdefd000 (32-bit, non-prefetchable) [size=256] Capabilities: [50] Power Management version 2 hdparm ouput: [root@server ~]# hdparm /dev/hdc /dev/hdc: multcount = 0 (off) IO_support = 0 (default 16-bit) unmaskirq = 0 (off) using_dma = 0 (off) keepsettings = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 8146/16/63, sectors = 8211168, start = 0 [root@server ~]# hdparm -I /dev/hdc /dev/hdc: ATA device, with non-removable media Model Number: InnoDisk Corp. - iCF4000 4GB Serial Number: 20091023AACA70000753 Firmware Revision: 081107 Standards: Supported: 5 Likely used: 6 Configuration: Logical max current cylinders 8146 8146 heads 16 16 sectors/track 63 63 -- CHS current addressable sectors: 8211168 LBA user addressable sectors: 8211168 device size with M = 1024*1024: 4009 MBytes device size with M = 1000*1000: 4204 MBytes (4 GB) Capabilities: LBA, IORDY(can be disabled) Standby timer values: spec'd by Vendor R/W multiple sector transfer: Max = 2 Current = 2 DMA: mdma0 mdma1 mdma2 udma0 udma1 *udma2 udma3 udma4 Cycle time: min=120ns recommended=120ns PIO: pio0 pio1 pio2 pio3 pio4 Cycle time: no flow control=120ns IORDY flow control=120ns Commands/features: Enabled Supported: * Power Management feature set * WRITE_BUFFER command * READ_BUFFER command * NOP cmd * CFA feature set * Mandatory FLUSH_CACHE HW reset results: CBLID- above Vih Device num = 0 CFA power mode 1: enabled and required by some commands Maximum current = 100ma Checksum: correct

    Read the article

  • Ubuntu 10.04 recognizing USB 2.0 external HD as USB 1.1

    - by btucker
    When I connect the USB 2.0 drive I see this: usb 1-4.3: new full speed USB device using ohci_hcd and address 5 so I know it's getting seen as USB 1.1. usb-devices shows that it really is USB 2.0 and connected to a USB 2.0 hub: T: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 D: Ver= 2.00 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=05e3 ProdID=0608 Rev=77.61 S: Product=USB2.0 Hub C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=12 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=13fd ProdID=1340 Rev=02.10 S: Manufacturer=Generic S: Product=External C: #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA I: If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage It seems the problem is that root hub is: T: Bus=01 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh=10 D: Ver= 1.10 Cls=09(hub ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=1d6b ProdID=0001 Rev=02.06 S: Manufacturer=Linux 2.6.32-25-server ohci_hcd S: Product=OHCI Host Controller S: SerialNumber=0000:00:02.0 C: #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=0mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub And there's no mention of ehci_hcd. lsusb -t gives me: /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ohci_hcd/10p, 12M |__ Port 4: Dev 2, If 0, Class=hub, Driver=hub/4p, 12M |__ Port 2: Dev 4, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 3: Dev 5, If 0, Class=stor., Driver=usb-storage, 12M |__ Port 6: Dev 3, If 0, Class=stor., Driver=usb-storage, 12M It seems like I'm missing something which would allow the OS to see USB 2.0 devices. Can anyone point me in the right direction? EDIT Full lsusb -v output: Bus 001 Device 005: ID 13fd:1340 Initio Corporation Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x13fd Initio Corporation idProduct 0x1340 bcdDevice 2.10 iManufacturer 1 Generic iProduct 2 External iSerial 3 57442D574341595930323337 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x05e3 Genesys Logic, Inc. 
idProduct 0x0608 USB-2.0 4-Port HUB bcdDevice 77.61 iManufacturer 0 iProduct 1 USB2.0 Hub iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0001 1x 1 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x00e0 Ganged power switching Ganged overcurrent protection Port indicators bPwrOn2PwrGood 50 * 2 milli seconds bHubContrCurrent 100 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0103 power enable connect Port 3: 0000.0103 power enable connect Port 4: 0000.0100 power Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 2.06 iManufacturer 3 Linux 2.6.32-25-server ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:02.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 11 bDescriptorType 41 nNbrPorts 10 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 1 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 0x00 PortPwrCtrlMask 0xff 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0103 power enable connect Port 5: 0000.0100 power Port 6: 0000.0103 power enable connect Port 7: 0000.0100 power Port 8: 0000.0100 power Port 9: 0000.0100 power Port 10: 0000.0100 power Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >