Search Results

Search found 5521 results on 221 pages for 'eric low'.

  • Solaris 11 VNC Server is "blurry" or "smeared"

    - by user12620111
    I've been annoyed by the quality of the image that is displayed by my VNC viewer when I visit a Solaris 11 VNC server. How should I describe the image? Blurry? Grainy? Smeared? Low resolution? Compressed? Badly encoded? This is what I have gotten used to seeing on Solaris 11: This is not a problem for me when I view Solaris 10 VNC servers. I've finally taken the time to investigate, and the solution is simple. On the VNC client, don't allow "Tight" encoding. My VNC Viewer will negotiate to Tight encoding if it is available. When negotiating with the Solaris 10 VNC server, Tight is not a supported option, so the Solaris 10 server and my client will agree on ZRLE. Now that I have disabled Tight encoding on my VNC client, the Solaris 11 VNC server looks much better: How should I describe the display when my VNC client is forced to negotiate to ZRLE encoding with the Solaris 11 VNC Server? Crisp? Clear? Higher resolution? Using a lossless compression algorithm? When I'm on a low bandwidth connection, I may re-enable Tight compression on my laptop. In the meantime, the ZRLE compression is sufficient for a coast-to-coast desktop, through the corporate firewall, encrypted over VPN, through my ISP and onto my laptop. YMMV.

    Read the article

  • Are my negative internship experiences representative of the real world? [closed]

    - by attemptAtAnonymity
    I'm curious if my current experiences as an intern are representative of actual industry. As background, I'm through the better part of two computing majors and a math major at a major university; I've aced every class and adored all of them, so I'd like to think that I'm not terrible at programming. I got an internship with one of the major software companies, and half way through now I've been shocked at the extraordinarily low quality of code. Comments don't exist, it's all spaghetti code, and everything that could be wrong is even worse. I've done a ton of tutoring/TAing, so I'm very used to reading bad code, but the major industry products I've been seeing trump all of that. I work 10-12 hours a day and never feel like I'm getting anywhere, because it's endless hours of trying to figure out an undocumented API or determine the behavior of some other part of the (completely undocumented) product. I've left work hating the job every day so far, and I desperately want to know if this is what is in store for the rest of my life. Did I draw a short straw on internships (the absurdly large paychecks imply that it's not a low quality position), or is this what the real world is like?

    Read the article

  • Having trouble returning a value from a method call when sending an array, and the program errors out on the sort when run

    - by programmerNOOB
    I am getting the following output when this program is run: Please enter the Social Security Number for taxpayer 0: 111111111 Please enter the gross income for taxpayer 0: 20000 Please enter the Social Security Number for taxpayer 1: 555555555 Please enter the gross income for taxpayer 1: 50000 Please enter the Social Security Number for taxpayer 2: 333333333 Please enter the gross income for taxpayer 2: 5464166 Please enter the Social Security Number for taxpayer 3: 222222222 Please enter the gross income for taxpayer 3: 645641 Please enter the Social Security Number for taxpayer 4: 444444444 Please enter the gross income for taxpayer 4: 29000 Taxpayer # 1 SSN: 111111111, Income is $20,000.00, Tax is $0.00 Taxpayer # 2 SSN: 555555555, Income is $50,000.00, Tax is $0.00 Taxpayer # 3 SSN: 333333333, Income is $5,464,166.00, Tax is $0.00 Taxpayer # 4 SSN: 222222222, Income is $645,641.00, Tax is $0.00 Taxpayer # 5 SSN: 444444444, Income is $29,000.00, Tax is $0.00 Unhandled Exception: System.InvalidOperationException: Failed to compare two elements in the array. --- System.ArgumentException: At least one object must implement IComparable. at System.Collections.Comparer.Compare(Object a, Object b) at System.Collections.Generic.ObjectComparer`1.Compare(T x, T y) at System.Collections.Generic.ArraySortHelper`1.SwapIfGreaterWithItems(T[] keys, IComparer`1 comparer, Int32 a, Int32 b) at System.Collections.Generic.ArraySortHelper`1.QuickSort(T[] keys, Int32 left, Int32 right, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) --- End of inner exception stack trace --- at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array) at Assignment5.Taxpayer.Main(String[] args) in Program.cs:line 150 Notice the 0s at the end of the line that should be the tax amount??? Here is the code: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace taxes { class Rates { // Create a class named rates that has the following data members: private int incLimit; private double lowTaxRate; private double highTaxRate; // use read-only accessor public int IncomeLimit { get { return incLimit; } } public double LowTaxRate { get { return lowTaxRate; } } public double HighTaxRate { get { return highTaxRate; } } //A class constructor that assigns default values public Rates() { int limit = 30000; double lowRate = .15; double highRate = .28; incLimit = limit; lowTaxRate = lowRate; highTaxRate = highRate; } //A class constructor that takes three parameters to assign input values for limit, low rate and high rate. public Rates(int limit, double lowRate, double highRate) { } // A CalculateTax method that takes an income parameter and computes the tax as follows: public int CalculateTax(int income) { int limit = 0; double lowRate = 0; double highRate = 0; int taxOwed = 0; // If income is less than the limit then return the tax as income times low rate. if (income < limit) taxOwed = Convert.ToInt32(income * lowRate); // If income is greater than or equal to the limit then return the tax as income times high rate. if (income >= limit) taxOwed = Convert.ToInt32(income * highRate); return taxOwed; } } //end class Rates // Create a class named Taxpayer that has the following data members: public class Taxpayer { //Use get and set accessors. 
string SSN { get; set; } int grossIncome { get; set; } // Use read-only accessor. public int taxOwed { get { return taxOwed; } } // The Taxpayer class should be set up so that its objects are comparable to each other based on tax owed. class taxpayer : IComparable { public int taxOwed { get; set; } public int income { get; set; } int IComparable.CompareTo(Object o) { int returnVal; taxpayer temp = (taxpayer)o; if (this.taxOwed > temp.taxOwed) returnVal = 1; else if (this.taxOwed < temp.taxOwed) returnVal = -1; else returnVal = 0; return returnVal; } // End IComparable.CompareTo } //end taxpayer IComparable class // **The tax should be calculated whenever the income is set. // The Taxpayer class should have a getRates class method that has the following. public static void GetRates() { // Local method data members for income limit, low rate and high rate. int incLimit = 0; double lowRate; double highRate; string userInput; // Prompt the user to enter a selection for either default settings or user input of settings. Console.Write("Would you like the default values (D) or would you like to enter the values (E)?: "); /* If the user selects default the default values you will instantiate a rates object using the default constructor * and set the Taxpayer class data member for tax equal to the value returned from calling the rates object CalculateTax method.*/ userInput = Convert.ToString(Console.ReadLine()); if (userInput == "D" || userInput == "d") { Rates rates = new Rates(); rates.CalculateTax(incLimit); } // end if /* If the user selects to enter the rates data then prompt the user to enter values for income limit, low rate and high rate, * instantiate a rates object using the three-argument constructor passing those three entries as the constructor arguments and * set the Taxpayer class data member for tax equal to the valuereturned from calling the rates object CalculateTax method. */ if (userInput == "E" || userInput == "e") { Console.Write("Please enter the income limit: "); incLimit = Convert.ToInt32(Console.ReadLine()); Console.Write("Please enter the low rate: "); lowRate = Convert.ToDouble(Console.ReadLine()); Console.Write("Please enter the high rate: "); highRate = Convert.ToDouble(Console.ReadLine()); Rates rates = new Rates(incLimit, lowRate, highRate); rates.CalculateTax(incLimit); } } static void Main(string[] args) { Taxpayer[] taxArray = new Taxpayer[5]; Rates taxRates = new Rates(); // Implement a for-loop that will prompt the user to enter the Social Security Number and gross income. for (int x = 0; x < taxArray.Length; ++x) { taxArray[x] = new Taxpayer(); Console.Write("Please enter the Social Security Number for taxpayer {0}: ", x + 1); taxArray[x].SSN = Console.ReadLine(); Console.Write("Please enter the gross income for taxpayer {0}: ", x + 1); taxArray[x].grossIncome = Convert.ToInt32(Console.ReadLine()); } Taxpayer.GetRates(); // Implement a for-loop that will display each object as formatted taxpayer SSN, income and calculated tax. 
for (int i = 0; i < taxArray.Length; ++i) { Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}", i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome)); } // end for // Implement a for-loop that will sort the five objects in order by the amount of tax owed Array.Sort(taxArray); Console.WriteLine("Sorted by tax owed"); for (int i = 0; i < taxArray.Length; ++i) { Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}", i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome)); } } //end main } // end Taxpayer class } //end Any clues as to why the dollar amount is coming up as 0 and why the sort is not working?
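
    For what it's worth, the two symptoms point at two separate problems in the code above: the three-argument Rates constructor never assigns its parameters and CalculateTax compares against local variables initialized to zero, so the computed tax always comes out as 0; and Array.Sort(taxArray) throws because the outer Taxpayer class (the element type of the array, not the unused nested taxpayer class) never implements IComparable. A minimal sketch of one way the pieces could be wired up follows; the names are illustrative rather than the original assignment code:

        using System;

        public class Rates
        {
            public int IncomeLimit { get; }
            public double LowTaxRate { get; }
            public double HighTaxRate { get; }

            // Default values chain to the three-argument constructor.
            public Rates() : this(30000, 0.15, 0.28) { }

            public Rates(int limit, double lowRate, double highRate)
            {
                // The original three-argument constructor left these assignments out.
                IncomeLimit = limit;
                LowTaxRate = lowRate;
                HighTaxRate = highRate;
            }

            public int CalculateTax(int income)
            {
                // Compare against the object's own properties, not zero-initialized locals.
                double rate = income < IncomeLimit ? LowTaxRate : HighTaxRate;
                return Convert.ToInt32(income * rate);
            }
        }

        // The array element type itself must be comparable for Array.Sort to work.
        public class Taxpayer : IComparable<Taxpayer>
        {
            public string SSN { get; set; }
            public int GrossIncome { get; private set; }
            public int TaxOwed { get; private set; }

            // Tax is calculated whenever the income is set, as the assignment requires.
            public void SetIncome(int income, Rates rates)
            {
                GrossIncome = income;
                TaxOwed = rates.CalculateTax(income);
            }

            public int CompareTo(Taxpayer other) => TaxOwed.CompareTo(other.TaxOwed);
        }

    With a comparer on Taxpayer itself, Array.Sort(taxArray) no longer needs the nested class, and the {3:c} format argument can print the stored TaxOwed instead of re-running CalculateTax against a default-constructed Rates object.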

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set it to use the AD domain as the realm. Now, when using Firefox, Safari or Chrome - everything is fine. When the user tries to open the site, he gets the login box. He enters simply "username" and "password" (let's pretend that it's an actual login and password :P) and he gets into the site. When using IE, however, things get nasty. When the user tries to open the site - he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time - it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where all users reside (it's "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username" - I get accepted into the site without problems. So, my guess is that IE sends the wrong username if the domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side - is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sys-admin, so configuring servers isn't my strong side... :P UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: IIS version is 6.0. The authentication methods enabled are Integrated and Digest - all other methods are disabled. As for the security log: I looked at it when doing a "username" and "password" login in Chrome/Firefox and when doing an "ad-domain\username" and "password" login from IE - the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use. UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler... While playing with it, I noticed that when "username" and "password" is entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is longer (almost two times) than the one sent by FF. I tried to disable Windows Integrated authentication and only leave Digest enabled - that fixed the problem (meaning, IE used the right realm just like the other browsers), but that caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (that causes problems when connecting to the database, etc.). Any ideas?

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration, which for EC2 you need to do with the command line, and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances. To start with, you need to manually set up your app in an EC2 VM; for a background worker, that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console: When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go. 1. Create a launch configuration This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this: as-create-launch-config sc-xyz-launcher # name of the launch config --image-id ami-7b9e9f12 # id of the AMI you extracted from your VM --region eu-west-1 # which region the new instance gets created in --instance-type t1.micro # size of the instance to create --group quicklaunch-1 # security group for the new instance 2. Create an auto-scaling group The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances: as-create-auto-scaling-group sc-xyz-asg # auto-scaling group name --region eu-west-1 # region to create in --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances --min-size 1 # minimum number of nodes in the group --max-size 5 # maximum number of nodes in the group --default-cooldown 300 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required --availability-zones eu-west-1a eu-west-1b eu-west-1c # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them 3. Create a scale-up policy The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached.
    It links to the auto-scaling group; this sample results in one additional node being spun up: as-put-scaling-policy scale-up-policy # policy name -g sc-psod-woker-asg # auto-scaling group the policy works with --adjustment 1 # size of the adjustment --region eu-west-1 # region --type ChangeInCapacity # type of adjustment, this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50% 4. Create a scale-down policy The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline: as-put-scaling-policy scale-down-policy -g sc-psod-woker-asg "--adjustment=-1" # in Windows, use double-quotes to surround a negative adjustment value --type ChangeInCapacity --region eu-west-1 5. Create a “high” CloudWatch alarm We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute: Then link the alarm to the scale-up policy in your group: 6. Create a “low” CloudWatch alarm The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy. And that’s it. You don’t need your original VM as the auto-scale group has a minimum number of nodes connected. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
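
    The post above mentions isolating the worker logic in one library with generic Start() and Stop() entry points so that an Azure Worker Role and a Windows Service can share the same code. As a rough illustration of that idea, here is a minimal Topshelf-style sketch; the class, queue and service names are invented for this example and the wiring is a plausible shape rather than code taken from the original post:

        using System;
        using Topshelf; // assumes the Topshelf NuGet package the post refers to

        // Shared worker: the same class can be driven by an Azure Worker Role or a Windows Service.
        public class QueueWorker
        {
            public void Start()
            {
                // Real code would begin polling the queue and processing messages here.
                Console.WriteLine("Queue worker started.");
            }

            public void Stop()
            {
                // Real code would stop polling and drain in-flight work here.
                Console.WriteLine("Queue worker stopped.");
            }
        }

        public class Program
        {
            public static void Main()
            {
                HostFactory.Run(host =>
                {
                    host.Service<QueueWorker>(svc =>
                    {
                        svc.ConstructUsing(name => new QueueWorker());
                        svc.WhenStarted(worker => worker.Start());
                        svc.WhenStopped(worker => worker.Stop());
                    });
                    host.RunAsLocalSystem();
                    host.SetServiceName("sc-xyz-worker");          // hypothetical name, echoing the naming style above
                    host.SetDisplayName("Background queue worker");
                });
            }
        }

    An Azure Worker Role would simply call the same Start() and Stop() methods from its own OnStart/OnStop overrides, which is what keeps the two hosting models in sync.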

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
    My Ubuntu is booting really slow (Windows is booting faster...). I am using Ubuntu a Dell Inspiron 1545 Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB Ram, 500GB HDD running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg the culprit seems to be this section, in particular the last three lines: [26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready [26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.355355] Console: switching to colour frame buffer device 170x48 [27.362346] fb0: radeondrmfb frame buffer device [27.362347] drm: registered panic notifier [27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0 [27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1 [30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1 [51.708241] CE: hpet increased min_delta_ns to 20113 nsec [59.448029] eth2: no IPv6 routers present But I have no idea how to start debugging this. sudo lshw -C video $ sudo lshw -C video *-display description: VGA compatible controller product: RV710 [Mobility Radeon HD 4300 Series] vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=fglrx_pci latency=0 resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff After loading the propriety driver my new dmesg log is below (starting from the first major time gap): [2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready [25.119737] udevd[520]: starting version 175 [25.167086] lp: driver loaded but no devices found [25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [25.215345] Disabling lock debugging due to kernel taint [25.231924] wmi: Mapper loaded [25.318414] lib80211: common routines for IEEE802.11 drivers [25.318418] lib80211_crypt: registered algorithm 'NULL' [25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes. [25.332095] [fglrx] vendor: 1002 device: 9552 count: 1 [25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100 [25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [25.334235] pci 0000:01:00.0: setting latency timer to 64 [25.337109] [fglrx] Kernel PAT support is enabled [25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [25.342803] Adding 4189180k swap on /dev/sda7. 
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present

    Read the article

  • Patch an Existing NK.BIN

    - by Kate Moss' Open Space
    As you know, we can use the MAKEIMG.EXE tool to create the OS image file, NK.BIN, or ROMIMAGE.EXE with a BIB file for more accurate control. But what if the image file is already created and needs to be patched, or you want to extract a file from NK.BIN? Platform Builder provides many useful command line utilities, and today I am going to introduce one, BINMOD.EXE. http://msdn.microsoft.com/en-us/library/ee504622.aspx is the official page for the BINMOD tool. As the page says, the BinMod Tool (binmod.exe) extracts files from a run-time image and replaces files in a run-time image, and its usage is binmod [-i imagename] [-r replacement_filename.ext | -e extraction_filename.ext] This is a simple tool and is easy to use; if we want to extract a file from nk.bin, just type binmod –i nk.bin –e filename.ext And that's it! Or you can try the -r command to replace a file inside NK.BIN. The small tool is good, but there is a limitation: because the files in the MODULES section are fixed up during ROMIMAGE, the original file format is not preserved, so extracting or replacing a file in the MODULES section is impossible. So, just like this small tool, this post is supposed to end here, right? Nah... It is not that easy. Just try the above example, and you will find that the tool does not work! Double check that the file is in the FILES section and that the NK.BIN is good, but it just quits. Before you throw away this useless toy, we can try to fix it! Yes, the source of this tool is available in your CE6 tree, at private\winceos\COREOS\nk\tools\romimage\binmod. As it is a tool that runs on Windows, you need the Windows SDK or Visual Studio to build the code. (I am going to save you some time by skipping the details, as building a desktop console mode program is fairly trivial.) cbinmod.cpp contains the core logic for this program, and following the error message we got, the following code looks suspect.   //   // Extra sanity check...   //   if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast &&       (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast)   {     dprintf("Found pTOC  = 0x%08x\n", (DWORD)dwpTOC);     fFoundIt = true;     break;   }    else    {     dprintf("NOTICE! Record %d looked like a TOC except DLL first = 0x%08X, and DLL last = 0x%08X\r\n", i, pTOCLoc->dllfirst, pTOCLoc->dlllast);   } The logic checks whether dllfirst <= dlllast, but look closer: the code only separates the high/low WORD of dllfirst and does not apply the same to dlllast. Is that on purpose, or a bug? The TOC is created by ROMIMAGE.EXE, so let's move to ROMIMAGE. In private\winceos\coreos\nk\tools\romimage\romimage\bin.cpp we find    Module::s_romhdr.dllfirst  = (HIWORD(xip_mem->dll_data_bottom) << 16) | HIWORD(xip_mem->kernel_dll_bottom);   Module::s_romhdr.dlllast   = (HIWORD(xip_mem->dll_data_top) << 16)    | HIWORD(xip_mem->kernel_dll_top); It is clear now: the high word of dllfirst is the upper 16 bits of the XIP DLL bottom and the low word is the upper 16 bits of the kernel DLL bottom; likewise, the high word of dlllast is the upper 16 bits of the XIP DLL top and the low word is the upper 16 bits of the kernel DLL top. Obviously, the correct statement should be if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(HIWORD(pTOCLoc->dlllast) << 16) &&    (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(LOWORD(pTOCLoc->dlllast) << 16)) Updating the code like this fixes the issue, or, since it is only an extra sanity check (as the comment says), you can simply get rid of it; either way the code moves forward and everything works as advertised.
"Extracting out copies of files from the nk.bin... replacing files... etc." Since the NK.BIN can be compressed, so the BinMod needs the compress.dll to decompress the data, the DLL can be found in C:\program files\microsoft platform builder\6.00\cepb\idevs\imgutils.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability etc. · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp) · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router, in order to make this work. · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop. · Update various statistics about the packet. In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically failover to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB is used in over 200 million deployments worldwide for the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or, you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?
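
    As a rough illustration of the per-packet flow described above, here is a sketch against a hypothetical embedded key-value store interface; the interface, table and method names below are invented for illustration and are not the Berkeley DB API:

        using System;

        // Hypothetical embedded key-value store interface, standing in for an engine such as BDB.
        public interface IEmbeddedStore
        {
            ITxn BeginTransaction();
        }

        public interface ITxn : IDisposable
        {
            byte[] Get(string table, string key);
            void Put(string table, string key, byte[] value);
            void Commit();
        }

        public class PacketProcessor
        {
            private readonly IEmbeddedStore _store;
            public PacketProcessor(IEmbeddedStore store) { _store = store; }

            public void Process(string destination, string sessionId, byte[] payload)
            {
                // All lookups and updates for one packet happen inside a single transaction,
                // so the forwarding address cannot change while the packet is being sent.
                using (var txn = _store.BeginTransaction())
                {
                    var nextHop = txn.Get("routes", destination);   // most efficient next hop
                    var sla     = txn.Get("slas", sessionId);       // priority / SLA for this flow
                    var context = txn.Get("sessions", sessionId);   // header context for this slice

                    if (IsLowPriority(sla))
                        txn.Put("held", sessionId, payload);        // hold low-priority packets for later
                    else
                        Forward(nextHop, payload);

                    // Update per-destination statistics in the same transaction.
                    txn.Put("stats", destination, IncrementCounter(txn.Get("stats", destination)));
                    txn.Commit();
                }
            }

            private static bool IsLowPriority(byte[] sla) => sla != null && sla[0] == 0;
            private static void Forward(byte[] nextHop, byte[] payload) { /* send on the wire */ }
            private static byte[] IncrementCounter(byte[] current) =>
                BitConverter.GetBytes((current == null ? 0L : BitConverter.ToInt64(current, 0)) + 1);
        }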

    Read the article

  • GWB | 30 Posts in 60 Days Update

    - by Staff of Geeks
    One month after the contest started, we definitely have some leaders and one blogger who has reached the mark.  Keep up the good work guys, I have really enjoyed the content being produced by our bloggers. Current Winners: Enrique Lima (37 posts) - http://geekswithblogs.net/enriquelima Almost There: Stuart Brierley (28 posts) - http://geekswithblogs.net/StuartBrierley Dave Campbell (26 posts) - http://geekswithblogs.net/WynApseTechnicalMusings Eric Nelson (23 posts) - http://geekswithblogs.net/iupdateable Coming Along: Liam McLennan (17 posts) - http://geekswithblogs.net/liammclennan Christopher House (13 posts) - http://geekswithblogs.net/13DaysaWeek mbcrump (13 posts) - http://geekswithblogs.net/mbcrump Steve Michelotti (10 posts) - http://geekswithblogs.net/michelotti Michael Freidgeim (9 posts) - http://geekswithblogs.net/mnf MarkPearl (9 posts) - http://geekswithblogs.net/MarkPearl Brian Schroer (8 posts) - http://geekswithblogs.net/brians Chris Williams (8 posts) - http://geekswithblogs.net/cwilliams CatherineRussell (7 posts) - http://geekswithblogs.net/CatherineRussell Shawn Cicoria (7 posts) - http://geekswithblogs.net/cicorias Matt Christian (7 posts) - http://geekswithblogs.net/CodeBlog James Michael Hare (7 posts) - http://geekswithblogs.net/BlackRabbitCoder John Blumenauer (7 posts) - http://geekswithblogs.net/jblumenauer Scott Dorman (7 posts) - http://geekswithblogs.net/sdorman   Technorati Tags: Standings,Geekswithblogs,30 in 60

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
    Parallel LINQ extends LINQ to Objects, and is typically very similar.  However, as I previously discussed, there are some differences.  Although the standard way to handle simple Data Parallelism is via Parallel.ForEach, it’s possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality… LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits.  Reading the description in LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data.  LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example. PLINQ is, generally, very similar.  Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation.  However, PLINQ adds one new method, which provides a very different purpose: ForAll. The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>.  Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects.  It does not filter a collection, but rather invokes an action on each element of the collection. At first glance, this seems like a bad idea.  For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized.  The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method “violates the functional programming principles that all the other sequence operators are based upon”, in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles.  Eric Lippert’s second reason for disliking a ForEach extension method does not necessarily apply to ForAll – replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there. Although ForAll may have philosophical issues, there is a pragmatic reason to include this method.  Without ForAll, we would take a fairly serious performance hit in many situations.  Often, we need to perform some filtering or grouping, then perform an action using the results of our filter.  Using a standard foreach statement to perform our action would avoid this philosophical issue: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action foreach (var item in filteredItems) { // These will now run serially item.DoSomething(); } This would cause a loss in performance, since we lose any parallelism in place, and cause all of our actions to be run serially.
We could easily use a Parallel.ForEach instead, which adds parallelism to the actions: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action once the filter completes Parallel.ForEach(filteredItems, item => { // These will now run in parallel item.DoSomething(); }); This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment “perform an action once the filter completes”.  Here, we’re parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we’re forcing the work to be partitioned and scheduled twice as many times. This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work: // Perform an action on the results of our filter collection .AsParallel() .Where( i => i.SomePredicate() ) .ForAll( i => i.DoSomething() ); The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: Partition your problem in a way to place the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ. This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach, instead.
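
    To make the final snippet concrete, here is a self-contained variant you can paste into a console project; the Item class, SomePredicate and DoSomething are hypothetical stand-ins for whatever your query filters and processes:

        using System;
        using System.Linq;
        using System.Threading;

        class Item
        {
            public int Id { get; set; }
            public bool SomePredicate() => Id % 2 == 0;   // stand-in filter
            public void DoSomething() =>
                Console.WriteLine($"Processed {Id} on thread {Thread.CurrentThread.ManagedThreadId}");
        }

        class Program
        {
            static void Main()
            {
                var collection = Enumerable.Range(0, 20)
                                           .Select(i => new Item { Id = i })
                                           .ToArray();

                // Filter and act in one parallel pipeline; no repartitioning between the two steps.
                collection.AsParallel()
                          .Where(i => i.SomePredicate())
                          .ForAll(i => i.DoSomething());
            }
        }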

    Read the article

  • Google I/O 2010 - Fireside chat with the GWT team

    Google I/O 2010 - Fireside chat with the GWT team. Fireside Chats, GWT. Bruce Johnson, Joel Webber, Ray Ryan, Amit Manjhi, Jaime Yap, Kathrin Probst, Eric Ayers, Ian Stewart, Christian Dupuis, Chris Ramsdale (moderator). If you're interested in what the GWT team has been up to since 2.0, here's your chance. We'll have several of the core engineers available to discuss the new features and frameworks in GWT, as well as to answer any questions that you might have. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Google presents the Nexus S, manufactured by Samsung, running Android 2.3 and equipped with NFC

    Google presents the Nexus S, manufactured by Samsung, with a clean design and equipped with NFC. Update of 07.12.2010 by Katleen. This time it's official: the Nexus One will have a successor, and it is indeed the device that Eric Schmidt briefly showed a few weeks ago at a conference. The second Google-branded smartphone was built by Samsung, which had to manufacture it while scrupulously respecting a very precise set of specifications for both hardware and design. Its exclusive feature? Being the first to run Android 2.3 (Gingerbread) in a "pure" version (not reworked by the carriers, nor by Samsung). Another big step forward: the inclusion of...

    Read the article

  • Are VB.NET to C# converters actually compilers?

    - by Rowan Freeman
    Whenever I see programs or scripts that convert between high-level programming languages, they are always labelled as converters. "VB.NET to C# converter" on Google results in expected, useful hits. However, "VB.NET to C# compiler" on Google results in things like comparisons between the C# and VB.NET compilers and other hits that are not quite what you'd be looking for. Webopedia defines a compiler as "a program that translates source code into object code". Eric Lippert, in an answer to "How do I create my own programming language and a compiler for it", suggests: "One of the best ways to get started writing a compiler is by writing a high-level-language-to-high-level-language compiler." Is a converter really just a compiler? What separates the two?

    Read the article

  • Chrome Apps Office Hours: Storage API Deep Dive

    Chrome Apps Office Hours: Storage API Deep Dive. Ask and vote for questions at: goo.gl Join us next week as we take a deeper dive into the new storage APIs available to Chrome Packaged Apps. We've invited Eric Bidelman, author of the HTML5 File System API book, to join Paul Kinlan, Paul Lewis, Pete LePage and Renato Dias for our weekly Chrome Apps Office Hours, in which we will pick apart some of the sample Chrome Apps and explain how we've used the storage APIs and why we made the decisions we did.

    Read the article

  • Nicolas Sarkozy wants a "more suitable Hadopi 3", because the current version is "not perfect"

    Nicolas Sarkozy wants a "more suitable Hadopi 3", because the current version is "not perfect". Update of 16.12.2010 by Katleen. At lunchtime today, an informal meeting between the President of the Republic and players of the French Internet was organized at the Elysée. Gathered around the table with Nicolas Sarkozy were: Eric Dupin (Presse Citron), Maître Eolas, Versac, Jacques-Antoine GRANJON (founder of vente-privée.com), Daniel Marhely (founder of deezer.com) and Xavier Niel (founder of Illiad). Already, in order to improve the way the Internet is managed in France, the head of state declared that he wants to create a Conseil Numérique «plus formé...

    Read the article

  • Webcast Replay Available: Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by BillSawyer
    I am pleased to release the replay and presentation for the ATG Live Webcast Scrambling Sensitive Data in EBS 12 Cloned Environments (Presentation). Eric Bing, Senior Director, Jagan Athreya, Enterprise Manager Product Management, and Elke Phelps, Senior Principal Product Manager, discussed the Oracle E-Business Suite Template for Data Masking Pack, and how it can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. (July 2012) Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • The Conseil National du Numérique is on track, in line with Nicolas Sarkozy's wishes

    The Conseil National du Numérique is on track, in line with Nicolas Sarkozy's wishes. Update of 21.12.2010 by Katleen. The Conseil National du Numérique wanted by Nicolas Sarkozy will indeed be put in place. Today, while visiting the offices of PriceMinister, Eric Besson spoke on the subject of this "consultative body, bringing together the players and companies of the digital sector and the Internet in France". And its implementation plan is already mapped out. Starting in January 2011, a working group made up of operators, ISPs and other major Internet players will be tasked with making proposals to the President of the Republic (as well as to the Prime Mini...

    Read the article

  • DevConnections Session Slides, Samples and Links

    - by Rick Strahl
    Finally coming up for air this week, after catching up with being on the road for the better part of three weeks. Here are my slides, samples and links for my four DevConnections sessions two weeks ago in Vegas. I ended up doing one extra, unprepared-for session on WebAPI and AJAX, as some of the speakers were either delayed or unable to make it at all to Vegas due to Sandy's mayhem. It was pretty hectic in the speaker room as Erik (our event coordinator extraordinaire) was scrambling to fill session slots with speakers :-). Surprisingly it didn't feel like the storm affected attendance drastically though, but I guess it's hard to tell without actual numbers. The conference was a lot of fun - it's been a while since I've been speaking at one of these larger conferences. I'd been taking a hiatus, and I forgot how much I enjoy actually giving talks. Preparing - well, not quite so much, especially since I ended up essentially preparing or completely rewriting for all three of these talks and I was stressing out a bit as I was sick the week before the conference and didn't get as much time to prepare as I wanted to. But - as always seems to be the case - it all worked out, but I guess those that attended have to be the judge of that… It was great to catch up with my speaker friends as well - man, I feel out of touch. I got to spend a bunch of time with Dan Wahlin, Ward Bell, Julie Lerman and for about 10 minutes even got to catch up with the ever so busy Michele Bustamante. Lots of great technical discussions including a fun and heated REST controversy with Ward and Howard Dierking. There were also a number of great discussions with attendees, describing how they're using the technologies touched in my talks in live applications. I got some great ideas from some of these and I wish there would have been more opportunities for these kinds of discussions. One thing I miss at these Vegas events though is some sort of coherent event where attendees and speakers get to mingle. These Vegas conferences are just like "go to sessions, then go out and PARTY on the town" - it's Vegas after all! But I think that it's always nice to have at least one evening event where everybody gets to hang out together and trade stories and geek talk. Overall there didn't seem to be much opportunity for that beyond lunch or the small and short exhibit hall events, which it seemed not many people actually went to. Anyways, a good time was had. I hope those of you that came to my sessions learned something useful. There were lots of great questions and discussions after the sessions - I always appreciate hearing the real life scenarios that people deal with in relation to the abstracted scenarios in sessions. Here are the session abstracts, a few comments and the links for downloading slides and samples. It's not quite like being there, but I hope this stuff turns out to be useful to some of you. I'll be following up a couple of these sessions with white papers in the following weeks. Enjoy. ASP.NET Architecture: How ASP.NET Works at the Low Level Abstract: Interested in how ASP.NET works at a low level? ASP.NET is an extremely powerful and flexible technology, but it's easy to forget about the core framework that underlies the higher level technologies like ASP.NET MVC, WebForms, WebPages, and Web Services that we deal with on a day to day basis.
    The ASP.NET core drives all the higher level handlers and frameworks layered on top of it, and with the core power comes some complexity in the form of a very rich object model that controls the flow of a request through the ASP.NET pipeline from Windows HTTP services down to the application level. To take full advantage of it, it helps to understand the underlying architecture and model. This session discusses the architecture of ASP.NET along with a number of useful tidbits that you can use for building and debugging your ASP.NET applications more efficiently. We look at the overall architecture, how requests flow from the IIS (7 and later) Web Server to the ASP.NET runtime into HTTP handlers, modules and filters and finally into high-level handlers like MVC, Web Forms or Web API. The focus of this session is on the low-level aspects of the ASP.NET runtime, with examples that demonstrate the bootstrapping of ASP.NET, threading models, how Application Domains are used, startup bootstrapping, how configuration files are applied and how all of this relates to the applications you write, either using low-level tools like HTTP handlers and modules or high-level pages or services sitting at the top of the ASP.NET runtime processing chain. Comments: I was surprised to see so many people show up for this session - especially since it was the last session on the last day and a short 1 hour session to boot. The room was packed and it was great to see so many people interested in the architecture of ASP.NET beyond the immediate high level application needs. Lots of great questions in this talk as well - I only wish this session would have been the full hour 15 minutes, as we came up just a little short of getting through the main material (didn't make it to Filters and Error handling). I haven't done this session in a long time and I had to pretty much re-figure all the system internals having to do with the ASP.NET bootstrapping in light of the changes that came with IIS 7 and later. The last time I did this talk was with IIS6, I guess it's been a while. I love doing this session, mainly because in my mind the core of ASP.NET overall is so cleanly designed to provide maximum flexibility without compromising performance that it has clearly stood the test of time in the 10 years or so that .NET has been around. While there are a lot of moving parts, the technology is easy to manage once you understand the core components, and the core model hasn't changed much even while the underlying architecture that drives it has been almost completely revamped, especially with the introduction of IIS 7 and later. Download Samples and Slides   Introduction to using jQuery with ASP.NET Abstract: In this session you'll learn how to take advantage of jQuery in your ASP.NET applications. Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors for document element selection, manipulating these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily and generally apply concepts of unobtrusive JavaScript principles to client scripting. The second half of the session then delves into jQuery's AJAX features and several different ways you can interact with ASP.NET on the server. You'll see examples of using ASP.NET MVC for serving HTML and JSON AJAX content, as well as using the new ASP.NET Web API to serve JSON and hypermedia content.
You'll also see examples of client side templating/databinding with Handlebars and Knockout. Comments:This session was in a monster of a room and to my surprise it was nearly packed, given that this was a 100 level session. I can see that it's a good idea to continue to do intro sessions to jQuery as there appeared to be quite a number of folks who had not worked much with jQuery yet and who most likely could greatly benefit from using it. Seemed seemed to me the session got more than a few people excited to going if they hadn't yet :-).  Anyway I just love doing this session because it's mostly live coding and highly interactive - not many sessions that I can build things up from scratch and iterate on in an hour. jQuery makes that easy though. Resources: Slides and Code Samples Introduction to jQuery White Paper Introduction to ASP.NET Web API   Hosting the Razor Scripting Engine in Your Own Applications Abstract:The Razor Engine used in ASP.NET MVC and ASP.NET Web Pages is a free-standing scripting engine that can be disassociated from these Web-specific implementations and can be used in your own applications. Razor allows for a powerful mix of code and text rendering that makes it a wonderful tool for any sort of text generation, from creating HTML output in non-Web applications, to rendering mail merge-like functionality, to code generation for developer tools and even as a plug-in scripting engine. In this session, we'll look at the components that make up the Razor engine and how you can bootstrap it in your own applications to hook up templating. You'll find out how to create custom templates and manage Razor requests that can be pre-compiled, detecting page changes and act in ways similar to a full runtime. We look at ways that you can pass data into the engine and retrieve both the rendered output as well as result values in a package that makes it easy to plug Razor into your own applications. Comments:That this session was picked was a bit of a surprise to me, since it's a bit of a niche topic. Even more of a surprise was that during the session quite a few people who attended had actually used Razor externally and were there to find out more about how the process works and how to extend it. In the session I talk a bit about a custom Razor hosting implementation (Westwind.RazorHosting) and drilled into the various components required to build a custom Razor Hosting engine and a runtime around it. This sessions was a bit of a chore to prepare for as there are lots of technical implementation details that needed to be dealt with and squeezing that into an hour 15 is a bit tight (and that aren't addressed even by some of the wrapper libraries that exist). Found out though that there's quite a bit of interest in using a templating engine outside of web applications, or often side by side with the HTML output generated by frameworks like MVC or WebForms. An extra fun part of this session was that this was my first session and when I went to set up I realized I forgot my mini-DVI to VGA adapter cable to plug into the projector in my room - 6 minutes before the session was about to start. So I ended up sprinting the half a mile + back to my room - and back at a full sprint. I managed to be back only a couple of minutes late, but when I started I was out of breath for the first 10 minutes or so, while trying to talk. 
Musta sounded a bit funny as I was trying to not gasp too much :-) Resources: Slides and Code Samples Westwind.RazorHosting GitHub Project Original RazorHosting Blog Post   Introduction to ASP.NET Web API for AJAX Applications Abstract:WebAPI provides a new framework for creating REST based APIs, but it can also act as a backend to typical AJAX operations. This session covers the core features of Web API as it relates to typical AJAX application development. We’ll cover content-negotiation, routing and a variety of output generation options as well as managing data updates from the client in the context of a small Single Page Application style Web app. Finally we’ll look at some of the extensibility features in WebAPI to customize and extend Web API in a number and useful useful ways. Comments:This session was a fill in for session slots not filled due MIA speakers stranded by Sandy. I had samples from my previous Web API article so decided to go ahead and put together a session from it. Given that I spent only a couple of hours preparing and putting slides together I was glad it turned out as it did - kind of just ran itself by way of the examples I guess as well as nice audience interactions and questions. Lots of interest - and also some confusion about when Web API makes sense. Both this session and the jQuery session ended up getting a ton of questions about when to use Web API vs. MVC, whether it would make sense to switch to Web API for all AJAX backend work etc. In my opinion there's no need to jump to Web API for existing applications that already have a good AJAX foundation. Web API is awesome for real externally consumed APIs and clearly defined application AJAX APIs. For typical application level AJAX calls, it's still a good idea, but ASP.NET MVC can serve most if not all of that functionality just as well. There's no need to abandon MVC (or even ASP.NET AJAX or third party AJAX backends) just to move to Web API. For new projects Web API probably makes good sense for isolation of AJAX calls, but it really depends on how the application is set up. In some cases sharing business logic between the HTML and AJAX interfaces with a single MVC API can be cleaner than creating two completely separate code paths to serve essentially the same business logic. Resources: Slides and Code Samples Sample Code on GitHub Introduction to ASP.NET Web API White Paper© Rick Strahl, West Wind Technologies, 2005-2012Posted in Conferences  ASP.NET   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();
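    To make the Web API vs. MVC discussion above a bit more concrete, here is a minimal sketch of a Web API controller serving JSON to an AJAX client. This is my own illustration, not part of the conference samples: the Todo type, the TodosController name and the in-memory list are all made up, and it assumes Web API's default api/{controller}/{id} route.

    // Minimal ASP.NET Web API controller sketch (illustrative only).
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class Todo
    {
        public int Id { get; set; }
        public string Text { get; set; }
        public bool Done { get; set; }
    }

    public class TodosController : ApiController
    {
        // In-memory store just for the demo - not thread-safe, not persistent.
        private static readonly List<Todo> _store = new List<Todo>();

        // GET api/todos - content negotiation returns JSON for typical AJAX callers
        public IEnumerable<Todo> Get()
        {
            return _store;
        }

        // GET api/todos/5
        public Todo Get(int id)
        {
            var todo = _store.FirstOrDefault(t => t.Id == id);
            if (todo == null)
                throw new HttpResponseException(HttpStatusCode.NotFound);
            return todo;
        }

        // POST api/todos - model binding deserializes the JSON body into a Todo
        public HttpResponseMessage Post(Todo todo)
        {
            todo.Id = _store.Count + 1;
            _store.Add(todo);
            return Request.CreateResponse(HttpStatusCode.Created, todo);
        }
    }

    A jQuery page would simply call $.getJSON('/api/todos') or POST a JSON payload to the same URL; the same controller could also back an externally consumed API, which is where Web API shines.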

    Read the article

  • DBaaS Online Forum - Now available on-demand

    - by Javier Puerta
    The Database-as-a-Service Online Forum was originally broadcast on Monday, October 21, 2013, at a US-timezone-friendly time. All of the forum's content is now available on-demand for customers and partners to watch and listen to. The content is available on demand here. Watch the on-demand forum to hear from analysts and experts on how companies are beginning to transform with Database as a Service, and learn the prescriptive steps your organization can take to design, deploy, and deliver Database as a Service today.

    Agenda
    Keynote: Carl Olofson, Research VP, IDC; Juan Loaiza, Senior Vice President, Oracle Systems Technology; Todd Kimbriel, Director, State of Texas, eGovernment Division; Eric Zonneveld, Oracle Architect, KPN; James Anthony, Technology Director, e-DBA
    Breakout 1 - Design DBaaS: Alan Levine, Senior Director, Oracle Enterprise Architects
    Breakout 2 - Deploy DBaaS: Michael Timpanaro-Perrotta, Director of Product Management, Oracle
    Breakout 3 - Deliver DBaaS: Sudip Datta, Vice President of Product Management, Oracle
    Closing Session: Michelle Malcher, IOUG President; Juan Loaiza, Senior Vice President, Oracle Systems Technology

    Read the article

  • Why enumerator structs are a really bad idea

    - by Simon Cooper
    If you've ever poked around the .NET class libraries in Reflector, I'm sure you will have noticed that the generic collection classes all implement their IEnumerator as a struct rather than a class. As you will see, this design decision has some rather unfortunate side effects... As is generally known in the .NET world, mutable structs are a Very Bad Idea, and there are several other blogs around explaining this (Eric Lippert's blog post explains the problem quite well). In the BCL, the generic collection enumerators are all mutable structs, as they need to keep track of where they are in the collection. This bit me quite hard when I was coding a wrapper around a LinkedList<int>.Enumerator. It boils down to this code:

    sealed class EnumeratorWrapper : IEnumerator<int>
    {
        private readonly LinkedList<int>.Enumerator m_Enumerator;

        public EnumeratorWrapper(LinkedList<int> linkedList)
        {
            m_Enumerator = linkedList.GetEnumerator();
        }

        public int Current
        {
            get { return m_Enumerator.Current; }
        }

        object System.Collections.IEnumerator.Current
        {
            get { return Current; }
        }

        public bool MoveNext()
        {
            return m_Enumerator.MoveNext();
        }

        public void Reset()
        {
            ((System.Collections.IEnumerator)m_Enumerator).Reset();
        }

        public void Dispose()
        {
            m_Enumerator.Dispose();
        }
    }

    The key line here is the MoveNext method. When I initially coded this, I thought that the call to m_Enumerator.MoveNext() would alter the enumerator state in the m_Enumerator class variable, and so the enumeration would proceed in an orderly fashion through the collection. However, when I ran this code it went into an infinite loop - the m_Enumerator.MoveNext() call wasn't actually changing the state in the m_Enumerator variable at all, and my code was looping forever on the first collection element. It was only after disassembling that method that I found out what was going on. The MoveNext method above results in the following IL:

    .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
    {
        .maxstack 1
        .locals init (
            [0] bool CS$1$0000,
            [1] valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator CS$0$0001)
        L_0000: nop
        L_0001: ldarg.0
        L_0002: ldfld valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
        L_0007: stloc.1
        L_0008: ldloca.s CS$0$0001
        L_000a: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
        L_000f: stloc.0
        L_0010: br.s L_0012
        L_0012: ldloc.0
        L_0013: ret
    }

    Here, the important line is 0002 - m_Enumerator is accessed using the ldfld operator, which does the following: "Finds the value of a field in the object whose reference is currently on the evaluation stack." So, what the MoveNext method is doing is the following:

    public bool MoveNext()
    {
        LinkedList<int>.Enumerator CS$0$0001 = this.m_Enumerator;
        bool CS$1$0000 = CS$0$0001.MoveNext();
        return CS$1$0000;
    }

    The enumerator instance being modified by the call to MoveNext is the one stored in the CS$0$0001 variable on the stack, and not the one in the EnumeratorWrapper class instance. Hence why the state of m_Enumerator wasn't getting updated. Hmm, ok. Well, why is it doing this? If you have a read of Eric Lippert's blog post about this issue, you'll notice he quotes a few sections of the C# spec. In particular, 7.5.4: "...if the field is readonly and the reference occurs outside an instance constructor of the class in which the field is declared, then the result is a value, namely the value of the field I in the object referenced by E."
    And my m_Enumerator field is readonly! Indeed, if I remove the readonly from the class variable then the problem goes away, and the code works as expected. The IL confirms this:

    .method public hidebysig newslot virtual final instance bool MoveNext() cil managed
    {
        .maxstack 1
        .locals init (
            [0] bool CS$1$0000)
        L_0000: nop
        L_0001: ldarg.0
        L_0002: ldflda valuetype [System]System.Collections.Generic.LinkedList`1/Enumerator EnumeratorWrapper::m_Enumerator
        L_0007: call instance bool [System]System.Collections.Generic.LinkedList`1/Enumerator::MoveNext()
        L_000c: stloc.0
        L_000d: br.s L_000f
        L_000f: ldloc.0
        L_0010: ret
    }

    Notice on line 0002, instead of the ldfld we had before, we've got a ldflda, which does this: "Finds the address of a field in the object whose reference is currently on the evaluation stack." Instead of loading the value, we're loading the address of the m_Enumerator field. So now the call to MoveNext modifies the enumerator stored in the class rather than on the stack, and everything works as expected.

    Previously, I had thought enumerator structs were an odd but interesting feature of the BCL that I had used in the past to do linked list slices. However, effects like this only underline how dangerous mutable structs are, and I'm at a loss to explain why the enumerators were implemented as structs in the first place. (Interestingly, the SortedList<TKey, TValue> enumerator is a struct but is private, which makes it even more odd - the only way it can be accessed is as a boxed IEnumerator!) I would love to hear people's theories as to why the enumerators are implemented in such a fashion. And bonus points if you can explain why LinkedList<int>.Enumerator.Reset is an explicit implementation but Dispose is implicit... Note to self: never ever ever code a mutable struct.
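    To see the readonly copy semantics in isolation, here is a minimal, self-contained sketch - my own illustration rather than the article's code - that contrasts a readonly and a non-readonly mutable-struct enumerator field. The class and member names are made up for the demo; it uses List<int>.Enumerator, which is a mutable struct just like the LinkedList one above.

    using System;
    using System.Collections.Generic;

    // Minimal repro of the readonly mutable-struct pitfall described above.
    // m_Broken is readonly, so every access yields a defensive copy (ldfld);
    // m_Working is not readonly, so MoveNext() mutates the stored struct (ldflda).
    sealed class ReadonlyEnumeratorDemo
    {
        private readonly List<int>.Enumerator m_Broken;
        private List<int>.Enumerator m_Working;

        public ReadonlyEnumeratorDemo(List<int> list)
        {
            m_Broken = list.GetEnumerator();
            m_Working = list.GetEnumerator();
        }

        public bool BrokenMoveNext()
        {
            // Advances a copy; the stored enumerator never moves.
            return m_Broken.MoveNext();
        }

        public bool WorkingMoveNext()
        {
            // Advances the field itself.
            return m_Working.MoveNext();
        }

        static void Main()
        {
            var demo = new ReadonlyEnumeratorDemo(new List<int> { 1, 2, 3 });

            // Returns true forever on a non-empty list - the infinite-loop symptom.
            Console.WriteLine(demo.BrokenMoveNext());   // True
            Console.WriteLine(demo.BrokenMoveNext());   // True (stored enumerator is still before the first element)

            // Terminates after three iterations, as expected.
            int count = 0;
            while (demo.WorkingMoveNext())
                count++;
            Console.WriteLine(count);                   // 3
        }
    }

    This is the same ldfld-versus-ldflda difference shown in the IL above, just boiled down to a console program you can step through.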

    Read the article

  • Fiddler Inspector for Federation Messages

    - by Your DisplayName here!
    Fiddler is a very useful tool for troubleshooting all kinds of HTTP(S) communications. It also features various extensibility points to make it even more useful. Using the inspector extensibility mechanism, I quickly knocked up an inspector for typical federation messages (thanks to Eric Lawrence, btw). Below is a screenshot for WS-Federation. I also added support for SAML 2.0p request/response messages. The inspector can be downloaded from the identitymodel CodePlex site. Simply copy the binary to the inspector folder in the Fiddler directory.

    Read the article

  • Google Buzz draws criticism from 10 countries, which have co-signed an official letter

    Update of 22.04.2010 by Katleen: Google Buzz draws criticism from 10 countries, which have co-signed an official letter. The Commission nationale de l'informatique et des libertés (CNIL) followed the launch of Google Buzz very closely. And, very quickly, complaints started coming in. That is why, barely two months after the arrival of this new social service, the CNIL sent a rather pointed letter to Eric Schmidt, CEO of Google. But the missive is intended to be even broader: it is addressed to "all online companies" and asks them to respect "the right to privacy of the citizens of the world". Co-signed by ten authorities from ...

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >