Search Results

Search found 15898 results on 636 pages for 'auto properties'.

  • MVC 2 jQuery Client-side Validation

    - by nmarun
    Well, I watched Phil Haack’s show What's New in Microsoft ASP.NET MVC 2 and was impressed by the client-side validation (starts at 17:45) that MVC 2 offers. I tried creating the same, but Phil does not show what .js files need to be included, and I was also not able to find the source code for the application that he used. In order to find out the required JavaScript file references, I added all of the files in my application to the page and ran it. Of course it worked, but this is definitely not an optimum solution. By removing one at a time and testing the app, I’ve short-listed the following ones: 1: <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script> 2: <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script> 3: <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script> Now, a little about the feature itself. Say I’m working with a Book application, so my model will look something like: 1: public class Book 2: { 3: [HiddenInput(DisplayValue = false)] 4: public int BookId { get; set; } 5:  6: [DisplayName("Book Title")] 7: [Required(ErrorMessage = "Book title is required")] 8: [StringLength(20, ErrorMessage = "Must be under 20 characters")] 9: public string Title { get; set; } 10:  11: [Required(ErrorMessage = "Author is required")] 12: [StringLength(40, ErrorMessage = "Must be under 40 characters")] 13: public string Author { get; set; } 14:  15: public decimal Price { get; set; } 16: 17: [DisplayName("ISBN")] 18: [StringLength(13, ErrorMessage = "Must be 13 characters")] 19: public string Isbn { get; set; } 20: } This ensures that the data passed will be validated upon post. But what happens if you add the following line (along with the above-mentioned .js files)? 1: <% Html.EnableClientValidation(); %> Now this acts as ‘on-the-fly’ or ‘real-time’ validation: when the user types 20 characters for the Title, the error shows up right at the 21st character. Beautiful… and you do not have to create the JavaScript function(s) for this. They’re auto-magically created for you. (Doing a ‘View Source’ on the browser page shows you the JavaScript logic that goes on behind the scenes.) I bumped into another post that shows how .NET 4 allows us to create custom validation attributes: Dynamic Range validation in MVC 2. This will help us attach virtually any business logic to the model itself. Please see the source code I’ve worked with.
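    To see how the pieces fit together, here is a minimal sketch of a strongly typed MVC 2 view for the Book model above. The markup is illustrative (the script paths and field layout are assumptions), but the helpers are standard MVC 2 and EnableClientValidation() must be called before BeginForm() for the client-side rules to be emitted:

    ```aspx
    <%-- Illustrative MVC 2 view for the Book model; script paths assume the default project layout. --%>
    <script src="<%= Url.Content("~/Scripts/jquery-1.4.1.min.js") %>" type="text/javascript"></script>
    <script src="<%= Url.Content("~/Scripts/MicrosoftAjax.js") %>" type="text/javascript"></script>
    <script src="<%= Url.Content("~/Scripts/MicrosoftMvcValidation.js") %>" type="text/javascript"></script>

    <% Html.EnableClientValidation(); %> <%-- must come before BeginForm() --%>
    <% using (Html.BeginForm()) { %>
        <%= Html.LabelFor(m => m.Title) %>
        <%= Html.TextBoxFor(m => m.Title) %>
        <%= Html.ValidationMessageFor(m => m.Title) %>

        <%= Html.LabelFor(m => m.Author) %>
        <%= Html.TextBoxFor(m => m.Author) %>
        <%= Html.ValidationMessageFor(m => m.Author) %>

        <input type="submit" value="Save" />
    <% } %>
    ```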

    Read the article

  • JavaFX 2.0 at Devoxx 2011

    - by Janice J. Heiss
    JavaFX Sessions Abound JavaFX had a big presence at Devoxx 2011 as witnessed by the number of sessions this year given by leading JavaFX movers and shakers.     “JavaFX 2.0 -- A Java Developer's Guide” by Java Champions Stephen Chin and Peter Pilgrim     “JavaFX 2.0 Hands On” by Jasper Potts and Richard Bair     “Animation Bringing your User Interfaces to Life” by Michael Heinrichs and John Yoong (JavaFX development team)     “Complete Guide to Writing Custom Bindings in JavaFX 2.0” by Michael Heinrichs (JavaFX development team)     “Java Rich Clients with JavaFX 2.0” by Jasper Potts and Richard Bair     “JavaFX Properties & Bindings for Experts” (and those who want to become experts) by Michael Heinrichs (JavaFX development team)     “JavaFX Under the Hood” by Richard Bair     “JavaFX Open Mic” with Jasper Potts and Richard Bair With the release of JavaFX 2.0 and Oracle’s move towards an open development model with an open bug database already created, it’s a great time for developers to take the JavaFX plunge. One Devoxx attendee, Mark Stephens, a developer at IDRsolutions blogged about a problem he was having setting up JavaFX on NetBeans to work on his Mac. He wrote: “I’ve tried desperate measures (I even read and reread the instructions) but it did not help. Luckily, I am at Devoxx at the moment and there seem to be a lot of JavaFX gurus here (and it is running on all their Macs). So I asked them… It turns out that sometimes the software does not automatically pickup the settings like it should do if you give it the JavaFX SDK path. The solution is actually really simple (isn’t it always once you know). Enter these values manually and it will work.” He simply entered certain values and his problem was solved. He thanked Java Champion Stephen Chin, “for a great talk at Devoxx and putting me out of my misery.” JavaFX in Java Magazine Over in the November/December 2011 issue of Java Magazine, Oracle’s Simon Ritter, well known for his creative Java inventions at JavaOne, has an article up titled “JavaFX and Swing Integration” in which he shows developers how to use the power of JavaFX to migrate Swing interfaces to JavaFX. The consensus among JavaFX experts is that JavaFX is the next step in the evolution of Java as a rich client platform. In the same issue Java Champion and JavaFX maven James Weaver has an article, “Using Transitions for Animation in JavaFX 2.0”. In addition, Oracle’s Vice President of Java Client Development, Nandini Ramani, provides the keys to unlock the mysteries of JavaFX 2.0 in her Java Magazine interview. Look for the JavaFX community to grow and flourish in coming years.

    Read the article

  • No audio with headphones, but audio works with integrated speakers

    - by Pedro
    My speakers work correctly, but when I plug in my headphones, they don't work. I am running Ubuntu 10.04. My audio card is Realtek ALC259 My laptop model is a HP G62t a10em In another thread someone fixed a similar issue (headphones work, speakers not) folowing this: sudo vi /etc/modprobe.d/alsa-base.conf (or some other editor instead of Vi) Append the following at the end of the file: alias snd-card-0 snd-hda-intel options snd-hda-intel model=auto Reboot but it doesnt work for me. Before making and changes to alsa, this was the output: alsamixer gives me this: Things I did: followed this HowTo but now no hardware seems to be present (before, there were 2 items listed): Now, alsamixer gives me this: alsamixer: relocation error: alsamixer: symbol snd_mixer_get_hctl, version ALSA_0.9 not defined in file libasound.so.2 with link time reference I guess there was and error in the alsa-driver install so I began reinstalling it. cd alsa-driver* //this works fine// sudo ./configure --with-cards=hda-intel --with-kernel=/usr/src/linux-headers-$(uname -r) //this works fine// sudo make //this doesn't work. see ouput error below// sudo make install Final lines of sudo make: hpetimer.c: In function ‘snd_hpet_open’: hpetimer.c:41: warning: implicit declaration of function ‘hpet_register’ hpetimer.c:44: warning: implicit declaration of function ‘hpet_control’ hpetimer.c:44: error: expected expression before ‘unsigned’ hpetimer.c: In function ‘snd_hpet_close’: hpetimer.c:51: warning: implicit declaration of function ‘hpet_unregister’ hpetimer.c:52: error: invalid use of undefined type ‘struct hpet_task’ hpetimer.c: In function ‘hpetimer_init’: hpetimer.c:88: error: ‘EINVAL’ undeclared (first use in this function) hpetimer.c:99: error: invalid use of undefined type ‘struct hpet_task’ hpetimer.c:100: error: invalid use of undefined type ‘struct hpet_task’ hpetimer.c: At top level: hpetimer.c:121: warning: excess elements in struct initializer hpetimer.c:121: warning: (near initialization for ‘__param_frequency’) hpetimer.c:121: warning: excess elements in struct initializer hpetimer.c:121: warning: (near initialization for ‘__param_frequency’) hpetimer.c:121: warning: excess elements in struct initializer hpetimer.c:121: warning: (near initialization for ‘__param_frequency’) hpetimer.c:121: warning: excess elements in struct initializer hpetimer.c:121: warning: (near initialization for ‘__param_frequency’) hpetimer.c:121: error: extra brace group at end of initializer hpetimer.c:121: error: (near initialization for ‘__param_frequency’) hpetimer.c:121: warning: excess elements in struct initializer hpetimer.c:121: warning: (near initialization for ‘__param_frequency’) make[1]: *** [hpetimer.o] Error 1 make[1]: Leaving directory `/usr/src/alsa/alsa-driver-1.0.9/acore' make: *** [compile] Error 1 And then sudo make install gives me: rm -f /lib/modules/0.0.0/misc/snd*.*o /lib/modules/0.0.0/misc/persist.o /lib/modules/0.0.0/misc/isapnp.o make[1]: Entering directory `/usr/src/alsa/alsa-driver-1.0.9/acore' mkdir -p /lib/modules/0.0.0/misc cp snd-hpet.o snd-page-alloc.o snd-pcm.o snd-timer.o snd.o /lib/modules/0.0.0/misc cp: cannot stat `snd-hpet.o': No such file or directory cp: cannot stat `snd-page-alloc.o': No such file or directory cp: cannot stat `snd-pcm.o': No such file or directory cp: cannot stat `snd-timer.o': No such file or directory cp: cannot stat `snd.o': No such file or directory make[1]: *** [_modinst__] Error 1 make[1]: Leaving directory `/usr/src/alsa/alsa-driver-1.0.9/acore' make: *** 
    [install-modules] Error 1 [SOLUTION] After screwing it all up, someone suggested trying the packages in Synaptic instead - so I did. I reinstalled the following packages and rebooted: -alsa-hda-realtek-ignore-sku-dkms -alsa-modules-2.6.32-25-generic -alsa-source -alsa-utils -linux-backports-modules-alsa-lucid-generic -linux-backports-modules-alsa-lucid-generic-pae -linux-sound-base (I think I listed them all). After rebooting, the audio worked, both in speakers and headphones. I have no idea which package made my audio work, but it certainly was one of them. [/SOLUTION]
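    For anyone who prefers the command line to Synaptic, a roughly equivalent reinstall can be scripted with apt-get. This is only a sketch: the package names are copied from the solution above, and the versioned module package should match the kernel reported by uname -r:

    ```sh
    # Reinstall the ALSA packages listed in the solution above (Ubuntu 10.04 / Lucid).
    # alsa-modules-$(uname -r) expands to e.g. alsa-modules-2.6.32-25-generic.
    sudo apt-get install --reinstall \
        alsa-utils alsa-source linux-sound-base \
        alsa-hda-realtek-ignore-sku-dkms \
        alsa-modules-$(uname -r) \
        linux-backports-modules-alsa-lucid-generic
    sudo reboot
    ```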

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes, the computer reboots (after the install DVD is removed), but I get a grub error 15. It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB swap 8GB /boot ext4 3TB / ext4 But I've also tried the following, just in case it matters: 100MB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas? ========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04) HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04) GPU == nvidia GTX-285 ??? 
== no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) The current ubuntu 64-bit v10.04 system boots and runs fine on a seagate 1TB.
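    On the question of a log file: assuming the installer got far enough to copy files onto the disk, the desktop installer (Ubiquity) normally leaves its logs under /var/log/installer/ on the target partition. A rough sketch of how to read them from the working 10.04 drive, with the device name as a placeholder:

    ```sh
    # Mount the root partition of the 3TB install (device name is an example) and
    # read the installer logs; /var/log/installer/ is where Ubiquity usually writes them.
    sudo parted -l                      # find the 3TB disk and its root partition
    sudo mkdir -p /mnt/new
    sudo mount /dev/sdb3 /mnt/new       # placeholder device -- adjust to your layout
    ls /mnt/new/var/log/installer/
    less /mnt/new/var/log/installer/syslog
    ```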

    Read the article

  • Add Global Hotkeys to Windows Media Player

    - by DigitalGeekery
    Do you use Windows Media Player in the background while working in other applications? The WMP Keys plug-in for Media Player adds global keyboard shortcuts that allow you to control Media Player even when it isn’t in focus. Windows Media Player has a slew of keyboard shortcuts that work only when the media player is active, but these shortcuts stop working once WMP is no longer in focus or minimized. WMP Keys adds the following default global hotkeys for Windows Media Player 10, 11, and 12. Ctrl+Alt+Home – Play / Pause Ctrl+Alt+Right – Next track Ctrl+Alt+Left – Previous track Ctrl+Alt+Up Arrow Key – Volume Up Ctrl+Alt+Down Arrow Key – Volume Down Ctrl+Alt+F – Fast Forward Ctrl+Alt+B – Fast Backward Ctrl+Alt+[1-5] – Rate 1-5 stars Note: Tapping Ctrl+Alt+F and Ctrl+Alt+B will skip ahead or back in 5-second intervals. Close out of Windows Media Player and then download and install WMP Keys (link below). After you’ve installed WMP Keys, you’ll need to enable it. Select Organize and then Options… In the Options window, select the Plug-ins tab, click Background in the Category window, then check the box for Wmpkeys Plugin. Click OK to save and exit. You can also enable the plug-in by selecting Tools > Plug-ins and clicking Wmpkeys Plugin. You can view and edit the global hotkeys in the WMP Keys settings window. Select Tools > Plug-in properties and click Wmpkeys Plugin. Below you can see all the default WMP Keys shortcuts. To change any of the shortcuts, select the text box then press the new keyboard shortcut. Click OK when finished. WMP Keys is a very simple little plug-in that makes using WMP while you’re multitasking just a little bit easier and more efficient. Looking for more plugins for Windows Media Player? Check out our previous articles on adding new features with Media Player Plus, and displaying song lyrics with Lyrics Plugin. Download WMP Keys

    Read the article

  • OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

    - by JuergenKress
    OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam This September, during Oracle OpenWorld 2013, the weather in San Francisco, as you can see from the photo, was exceptionally sunny. The dramatic final few days of the America's Cup sailing competition, being held every day in the bay, coincided with the conference and meant that there was almost a holiday feel to the whole event. Here's my annual round-up of what I think was most interesting at OpenWorld 2013 for Fusion Middleware architects and administrators; I hope you find it useful and if you think I've missed something please add a comment! WebLogic and Cloud Application Foundation (CAF) The big WebLogic release of the year already happened a few months ago with 12.1.2, so I won't duplicate that here. Will Lyons discussed the WebLogic and Coherence roadmap, which essentially is that 12.1.3 will probably be released to coincide with SOA 12c next year and that 12.1.4, the next feature-rich WebLogic release, is more likely to be in 2015. This latter release will probably include full Java EE 7 support, have enhancements for multi-tenancy and further auto-scaling features to support increased density (i.e. more WebLogic usage for the same amount of hardware). There's a new Oracle Virtual Assembly Builder (OVAB) out already and an Oracle Traffic Director (OTD) 12c release round the corner too. Also of relevance to administrators is that Oracle has increased the support lifetime for Fusion Middleware 11g (e.g. WebLogic 10.3.6) so that Premier Support will now run to the end of 2018 and Extended Support until 2021 - this should remove any Oracle-driven pressure to upgrade at least. Java Mission Control Java Mission Control (JMC) is the HotSpot Java 7 version of JRockit 6 Mission Control, a very nice performance monitoring tool from Oracle's BEA acquisition. Flight Recorder is a feature built into the JVM which records diagnostic events into, typically, a circular buffer which can then be used for historical analysis, particularly in the case of a JVM crash or hang. It's been available separately for WebLogic only for perhaps a year now but, more significantly, it now includes JVM events and was bundled in with JDK7 Update 40 a few weeks ago. I attended a couple of interesting JavaOne sessions on JMC/Flight Recorder and have to say it's looking really good - it has all the previous JRMC features except for the memory leak detector, plus some enhancements around operative sets and ECID filtering I think. Marcus also showed how you could add your own events into Flight Recorder by building your own event class - they are then available for graphing alongside all the other events in JMC. This uses a currently unsupported/undocumented API, but it's also the same one that WebLogic uses for WLDF events so I imagine it is stable. I'm not quite sure whether this would be useful to custom applications, as opposed to infrastructure services or ISV packaged applications, but it was a very nice demonstration. I've been testing JMC / FR enabling on several environments recently and my confidence is growing - it feels robust and I think it could very soon be part of my standard builds. Read the full article here. WebLogic Partner Community For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
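    If you want to try the JDK 7u40 Flight Recorder bits mentioned above, the commercial-features switches below are the usual way to turn it on; the recording name, duration and filename are just example values, not recommendations from the original post:

    ```sh
    # Enable Java Flight Recorder on HotSpot (JDK 7u40 or later) and start a
    # recording at launch. JFR is a commercial feature, hence the unlock flag.
    java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
         -XX:StartFlightRecording=duration=120s,filename=myapp.jfr \
         -jar myapp.jar

    # With the same two -XX flags set on the JVM, a recording can also be started
    # against a running process via jcmd:
    jcmd <pid> JFR.start name=adhoc duration=120s filename=adhoc.jfr
    ```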
Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: OOW,Simon Haslam,Oracle OpenWorld,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • No database selected error in CodeIgniter running on MAMP stack

    - by Apophenia Overload
    First off, does anyone know of a good place to get help with CodeIgniter? The official community forums are somewhat disappointing in terms of getting many responses. I have ci installed on a regular MAMP stack, and I’m working on this tutorial. However, I have only gone through the Created section, and currently I am getting a No database selected error. Model: <?php class submit_model extends Model { function submitForm($school, $district) { $data = array( 'school' => $school, 'district' => $district ); $this->db->insert('your_stats', $data); } } View: <?php $this->load->helper('form'); ?> <?php echo form_open('main'); ?> <p> <?php echo form_input('school'); ?> </p> <p> <?php echo form_input('district'); ?> </p> <p> <?php echo form_submit('submit', 'Submit'); ?> </p> <?php echo form_close(); ?> Controller: <?php class Main extends controller { function index() { // Check if form is submitted if ($this->input->post('submit')) { $school = $this->input->xss_clean($this->input->post('school')); $district = $this->input->xss_clean($this->input->post('district')); $this->load->model('submit_model'); // Add the post $this->submit_model->submitForm($school, $district); } $this->load->view('main_view'); } } database.php $db['default']['hostname'] = "localhost:8889"; $db['default']['username'] = "root"; $db['default']['password'] = "root"; $db['default']['database'] = "stats_test"; $db['default']['dbdriver'] = "mysql"; $db['default']['dbprefix'] = ""; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = ""; $db['default']['char_set'] = "utf8"; $db['default']['dbcollat'] = "utf8_general_ci"; config.php $config['base_url'] = "http://localhost:8888/ci/"; ... $config['index_page'] = "index.php"; ... $config['uri_protocol'] = "AUTO"; So, how come it’s giving me this error message? A Database Error Occurred Error Number: 1046 No database selected INSERT INTO `your_stats` (`school`, `district`) VALUES ('TJHSST', 'FairFax') Is there any way for me to test if CodeIgniter can actually detect the mySQL databases I've created with phpMyAdmin in my MAMP stack?
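    A couple of things are usually suggested for this 1046 error with CodeIgniter on MAMP; treat the sketch below as things to test rather than a confirmed diagnosis. The first is making sure the database library is actually loaded before the insert; the second, if the connection still misbehaves, is using 127.0.0.1 instead of localhost so the 8889 port is honoured rather than the default MySQL socket:

    ```php
    <?php
    // Option 1: autoload the database library so every request connects using
    // the settings in config/database.php. In application/config/autoload.php:
    $autoload['libraries'] = array('database');

    // Option 2: load it explicitly in the model before touching $this->db.
    class Submit_model extends Model {

        function submitForm($school, $district)
        {
            $this->load->database();   // connect using the 'default' group
            $data = array(
                'school'   => $school,
                'district' => $district,
            );
            $this->db->insert('your_stats', $data);
        }
    }

    // If the error persists, try this in config/database.php so MAMP's port is used:
    // $db['default']['hostname'] = "127.0.0.1:8889";
    ```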

    Read the article

  • How to detect and configure an output with xrandr?

    - by ysap
    I have a DELL U2410 monitor connected to a Compaq 100B desktop equipped with an integrated AMD/ATI graphics card (AMD E-350). The installed O/S is Ubuntu 10.04 LTS. The computer is connected to the monitor via the DVI connection. The problem is that I cannot set the desktop resolution to the native 1920x1200. The maximum allowed resolution is 1600x1200. Doing some research I found about the xrandr utility. Unfortunately, when trying to use it I cannot configure it to the required resolution. First, it does not report the output name (which supposed to be DVI-0), saying default instead. Without it I cannot use the --fb option. The EDID utility seems to identify the monitor well. Here's the output from get-edid: # EDID version 1 revision 3 Section "Monitor" # Block type: 2:0 3:ff # Block type: 2:0 3:fc Identifier "DELL U2410" VendorName "DEL" ModelName "DELL U2410" # Block type: 2:0 3:ff # Block type: 2:0 3:fc # Block type: 2:0 3:fd HorizSync 30-81 VertRefresh 56-76 # Max dot clock (video bandwidth) 170 MHz # DPMS capabilities: Active off:yes Suspend:yes Standby:yes Mode "1920x1200" # vfreq 59.950Hz, hfreq 74.038kHz DotClock 154.000000 HTimings 1920 1968 2000 2080 VTimings 1200 1203 1209 1235 Flags "-HSync" "+VSync" EndMode # Block type: 2:0 3:ff # Block type: 2:0 3:fc # Block type: 2:0 3:fd EndSection but the xrandr -q command returns: Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 When I try to set the resolution, I get: $ xrandr --fb 1920x1200 xrandr: screen cannot be larger than 1600x1200 (desired size 1920x1200) $ xrandr --output DVI-0 --auto warning: output DVI-0 not found; ignoring How can I set the screen resolution to 1920x1200? Why doesn't xrandr identify the DVI-0 output? Note that the same computer running Ubuntu version higher than 10.04 detects the correct resolution with no problems. On this machine I cannot upgrade due to some legacy hardware compatibility problems. Also, I don't see any optional screen drivers available in the Hardware Drivers dialog. ---- UPDATE: following the answer to this question, I got some advance. Now the required mode is listed in the xrandr -q list, but I can't switch to that mode. Using the Monitors applet (which now shows the new mode), I get the response that: The selected configuration for displays could not be applied. Could not set the configuration to CRTC 262. 
From the command line it looks like this: $ cvt 1920 1200 60 # 1920x1200 59.88 Hz (CVT 2.30MA) hsync: 74.56 kHz; pclk: 193.25 MHz Modeline "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync $ xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync $ xrandr -q Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 1920x1200_60.00 (0x120) 193.0MHz h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.5KHz v: height 1200 start 1203 end 1209 total 1245 clock 59.8Hz $ xrandr --addmode default 1920x1200_60.00 $ xrandr -q Screen 0: minimum 640 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 0.0* 1280x1024 0.0 1152x864 0.0 1024x768 0.0 800x600 0.0 640x480 0.0 720x400 0.0 1920x1200_60.00 59.8 $ xrandr --output default --mode 1920x1200_60.00 xrandr: Configure crtc 0 failed Another piece of info (if it helps anyone): $ sudo lshw -c video *-display UNCLAIMED description: VGA compatible controller product: ATI Technologies Inc vendor: ATI Technologies Inc physical id: 1 bus info: pci@0000:00:01.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list configuration: latency=0 resources: memory:c0000000-cfffffff(prefetchable) ioport:f000(size=256) memory:feb00000-feb3ffff
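    One thing sometimes suggested when xrandr reports "Configure crtc 0 failed" is to hand the same modeline to the X server at startup through /etc/X11/xorg.conf instead of adding it at runtime. The snippet below reuses the cvt output from above; it is only a sketch, and since lshw shows the controller as UNCLAIMED (no native driver loaded), the server may still refuse the mode until a driver that supports the AMD E-350 is in place:

    ```
    Section "Device"
        Identifier "Card0"
        # Driver line intentionally omitted; the underlying issue may be that no
        # driver claims the AMD E-350 GPU on this Ubuntu release.
    EndSection

    Section "Monitor"
        Identifier "DELL U2410"
        Modeline "1920x1200_60.00"  193.25  1920 2056 2256 2592  1200 1203 1209 1245 -hsync +vsync
        Option   "PreferredMode" "1920x1200_60.00"
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Card0"
        Monitor      "DELL U2410"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            Modes "1920x1200_60.00"
        EndSubSection
    EndSection
    ```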

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long since had to context switch between two IDEs, Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints of Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control). .and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto loaded) and select Import SQL Source Control project Locate your working folder and click Import. This creates a Red Gate database project under your solution: From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. One you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website. Technorati Tags: SQL Server

    Read the article

  • Rights Expiry Options in IRM 11g

    - by martin.abrahams
    Among the many enhancements in IRM 11g, we have introduced a couple of new rights expiry options that may be applied to any role. These options were supported in previous versions, but fell into the "advanced configuration" category. In 11g, the options can be applied simply by selecting a check-box in the properties of a role, as shown by the rather extreme example below, where the role allows access for just two minutes after they are sealed. The new options are: To define a role that expires automatically some period after it is assigned To define a role that evaluates expiry relative to the time that each document is sealed These options supplement the familiar options to allow open-ended access (limited by offline access and the ever-present option to revoke rights at any time) and the option to define time windows with specific start dates and end dates. The value of these options is easiest to illustrate with some publishing examples: You might define a role with a one year expiry to be assigned to users who purchase a one year subscription. For each individual user, the year would be calculated from the time that the role was assigned to them. You might define a role that allows documents to be accessed only for 24 hours from the time that they are published - perhaps as a preview mechanism designed to tempt users to sign up for a full subscription. Upon payment of a full fee, users can simply be reassigned a role that gives them greater access to exactly the same documents. In a corporate environment, you might use such roles for fixed term contractors or for workflows that involve information with a short lifespan, or perhaps as part of a compliance process that requires rights to be formally re-approved at intervals. Being role-based, the time constraints apply to any number of documents - including documents that have not yet been created. For example, a user with a one year subscription would have access to all documents published in the relevant classification during the year without any further configuration. Crucially, unlike other solutions, it is not the documents that expire, but the rights of particular users. Whereas some solutions make documents completely inaccessible for all users after expiry, Oracle IRM can allow some users to continue using documents while other users lose access. Equally crucially, a user whose rights have expired can always be granted fresh rights at any time - for example, because they renew their subscription or because a manager confirms that they still need the rights as part of a corporate compliance process. By applying expiry to rights rather than to documents, Oracle IRM avoids the risk of locking an organization out of its own information.

    Read the article

  • Oracle SOA Security for OUAF Web Services

    - by Anthony Shorten
    With the ability to use Oracle SOA Suite 11g with the Oracle Utilities Application Framework based products, an additional consideration needs to be configured to ensure correct integration. That additional consideration is security. By default, SOA Suite propagates any credentials from the calling application through to the interfacing applications. In most cases, this behavior is not appropriate as the calling application may use different credential stores and also some interfaces are “disconnected” from a calling application (for example, a file based load using the File Adapter). These situations require that the Web Service calls to the Oracle Utilities Application Framework based products have their own valid credentials. To do this the credentials must be attached at design time or at run time to provide the necessary credentials for the call. There are a number of techniques that can be used to do this: At design time, when integrating a Web Service from an Oracle Utilities Application Framework based product you can attach the security policy “oracle/wss_username_token_client_policy” in the composite.xml view. In this view select the Web Service you want to attach the policy to and right click to display the context menu and select “Configure WS Policies” and select the above policy from the list. If you are using SSL then you can use “oracle/wss_username_token_over_ssl_client_policy” instead. At design time, you can also specify the credential key (csf-key) associated with the above policy by selecting the policy and clicking “Edit Config Override Properties”. You name the key appropriately. Everytime the SOA components are deployed the credential configuration is also sent. You can also do this after deployment, or what I call at “runtime”, by specifying the policy and credential key in the Fusion Middleware Control. Refer to the Fusion Middleware Control documentation on how to do this. To complete the configuration you need to add a map and the key specified earlier to the credential store in the Oracle WebLogic instance used for Oracle SOA Suite. From Fusion Middleware Control, you do this by selecting the domain the SOA Suite is installed in a select “Credentials” from the context menu. You now need to add the credentials by adding the map “oracle.wsm.security” (the name is IMPORTANT) and creating a key with the necessary valid credentials. The example below added a key called “mdm.key”. The name I used is for example only. You can name the key anything you like as long as it corresponds to the key you specified in the design time component. Note: I used SYSUSER as an example credentials in the example, in real life you would use another credential as SYSUSER is not appropriate for production use. This key can be reused for other Oracle Utilities Application Framework Web Service integrations or you can use other keys for individual Web Service calls. Once the key is created and the SOA Suite components deployed the transactions should be able to be called as necessary. If you need to change the password for the credentials it can be done using the Fusion Middleware Control functionality.
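    For reference, the design-time steps above end up expressed in the composite.xml along these lines. This fragment is hand-written rather than copied from a generated composite, so the service, port and namespace values are placeholders and the exact attributes of the csf-key override should be checked against your SOA Suite version:

    ```xml
    <!-- Illustrative only: a web service reference with the username-token client
         policy attached and the credential key from the example ("mdm.key") supplied
         as a configuration override. Names and namespaces are placeholders. -->
    <reference name="OUAFService" ui:wsdlLocation="OUAFService.wsdl">
      <interface.wsdl interface="http://example.com/ouaf#wsdl.interface(OUAFPortType)"/>
      <binding.ws port="http://example.com/ouaf#wsdl.endpoint(OUAFService/OUAFPort)">
        <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                             orawsp:category="security" orawsp:status="enabled"/>
        <property name="csf-key" type="xs:string" many="false">mdm.key</property>
      </binding.ws>
    </reference>
    ```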

    Read the article

  • Silverlight Cream for June 16, 2010 -- #884

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Emiel Jongerius, Charles Petzold, Adam Kinney, Deepesh Mohnani, Timmy Kokke, and Damon Payne. Shoutouts: Andy Beaulieu reported his Coding4Fun: Shuffleboard Game for WP7 has been posted -- Big ol' Tutorial and 6 videos of WP7 goodness Karl Shifflett announced Three New WPF and Silverlight Designer Videos Posted Charles Petzold has a cool Flip-Number Clock in Silverlight posted... cool demo, and the source. From SilverlightCream.com: Data Driven Applications with MVVM Part II: Messaging, Unit Testing, and Live Data Sources Zoltan Arvai has part 2 of his Data-Driven Apps with MVVM up, and this one is also including Messaging, Unit Testing, and Live WCF Data... good tutorial and all the code. Silverlight DataContext Changed Event and Trigger Emiel Jongerius takes a hard swing at the lack of DataContextChanged... his solution involves two attached properties instead of one... check it out and see what you think! Orientation Strategies for Windows Phone 7 Charles Petzold is discussing WP7 Orientation... showing the problems you can get involved in, and how to work through them... and you might be surprised at how he does it :) ... pretty cool as usual, Charles! Debugging the TranslateZoomRotate WPF Behavior in Blend Adam Kinney talks through a bug reported about the WPF TranslateZoomRotate Behavior. Again, it's WPF, but it's in Blend, and ya never know when the solution might apply. I want my app to look like the Zune client Deepesh Mohnani demonstrates using the Cosmopolitan theme to get his app to have the same look as the Zune client. MVVM Project and Item Templates Timmy Kokke is continuing with his cool SilverAmp media player, using it to expand upon the new Blend and Silverlight 4 features. This episode touches very lightly on cranking up a new MVVM project in Blend. Great Features for MVVM Friendly Objects Part 0: Favor Composition Over Inheritance Damon Payne has the first part up of a series he's working on with 'MVVM Friendly' features... he's building out a lot of the infrastructure in this post for the ones that follow... all good stuff. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • BI&EPM in Focus June 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Analyst Report from Ovum: BI bites into a bigger slice of Oracle’s Red Stack Customers INC Research Ensures 24/7 Enterprise Application Availability and Supports Rapid Expansion in Asia with Managed Cloud Services – Hyperion Planning, PeopleSoft, E-Business Suite, SOA Suite PL Developments Improves Quality and Demand Planning Accuracy, Streamlines Compliance as It Moves into Manufacturing – Hyperion Planning, OBIEE, E-Business Suite Release 12.1, Agile, Demantra Kiabi Provides Store Managers with Monthly Earnings Statements in Four Business Days to Support Continued Retail Growth – Hyperion Planning, Hyperion Financial Reporting, Hyperion Smart View for Office Speedy Cash Improves Global Financial Budgeting and Forecasting to Support Continued Company Growth - Hyperion Planning, Essbase, Hyperion Smart View for Office, Hyperion Financial Management Grupo Sports World Automates and Reduces Budget Consolidation Time by 33% for 30 Fitness Centers – Hyperion Planning Jupiter Shop Channel Automates Budgeting Processes, Enhances Visibility of Project Investments to Support Strategic Decision-Making – Hyperion Planning GENBAND Saves US$1.25 Million Annually with Automated Global Trade Management, Gains Compliance Assurance – Hyperion Financial Management, E-Business Suite Aldar Properties Consolidates and Simplifies Group Planning and Reporting for Business and Finance Structures with Integrated ERP and Business Intelligence – Hyperion Planning, Essbase, Data Integrator, OBIEE, E-Business Suite, SUN Link to Complete Archive Enterprise Performance Management Hyperion EPM 11.1.2.3 Webcast Tutorials EPM Blog: Three Technologies CFOs Need to Know About The CFO as Catalyst for Change - Part 1 The CFO as Catalyst for Change - Part 2 Actions Speak Louder in Scorecards Unlocking Business Potential with Enterprise Performance Management Business Intelligence Oracle Database 12c is launched Analysis: How to Take Big Data Advantage of Oracle Database 12c by Data-informed.com

    Read the article

  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE.  As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE.  The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and Java FX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operate in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  Finally, note that the Critical Patch Update for Java SE is cumulative, in other words it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012. For More Information: The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html.  An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001 More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html  

    Read the article

  • Remove the Lock Icon from a Folder in Windows 7

    - by Trevor Bekolay
    If you’ve been playing around with folder sharing or security options, then you might have ended up with an unsightly lock icon on a folder. We’ll show you how to get rid of that icon without over-sharing it. The lock icon in Windows 7 indicates that the file or folder can only be accessed by you, and not any other user on your computer. If this is desired, then the lock icon is a good way to ensure that those settings are in place. If this isn’t your intention, then it’s an eyesore. To remove the lock icon, we have to change the security settings on the folder to allow the Users group to, at the very least, read from the folder. Right-click on the folder with the lock icon and select Properties. Switch to the Security tab, and then press the Edit… button. A list of groups and users that have access to the folder appears. Missing from the list will be the “Users” group. Click the Add… button. The next window is a bit confusing, but all you need to do is enter “Users” into the text field near the bottom of the window. Click the Check Names button. “Users” will change to the location of the Users group on your particular computer. In our case, this is PHOENIX\Users (PHOENIX is the name of our test machine). Click OK. The Users group should now appear in the list of Groups and Users with access to the folder. You can modify the specific permissions that the Users group has if you’d like – at the minimum, it must have Read access. Click OK. Keep clicking OK until you’re back at the Explorer window. You should now see that the lock icon is gone from your folder! It may be a small aesthetic nuance, but having that one folder stick out in a group of other folders is needlessly distracting. Fortunately, the fix is quick and easy, and does not compromise the security of the folder!
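    If you have more than one folder to fix, the same permission change can be scripted instead of clicking through the dialogs. This uses the built-in icacls tool rather than the GUI steps above; the path is a placeholder, and the grant mirrors the minimum Read (plus execute) access described in the article:

    ```
    REM Grant the local Users group read & execute on the folder, inherited by
    REM files (OI) and subfolders (CI). Adjust the path to your locked folder.
    icacls "C:\Path\To\LockedFolder" /grant Users:(OI)(CI)RX
    ```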

    Read the article

  • Customize Entity Framework SSDL &amp; SQL Generation

    - by Dane Morgridge
    In almost every talk I have done on Entity Framework I get questions on how to do custom SSDL or SQL when using model first development.  Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is getting auto generated.  Luckily, there is a tool that can help.  In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. You have the ability to select different generation strategies and it also allows you to inject custom T4 Templates into the generation workflow so that you can customize the SSDL and SQL generation.  When you select to generate a database from a model the dialog is replaced by one with more options:   You can clone the individual workflow for either the current project or current machine.  The templates are installed at “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen” on my local machine and you can make a copy of any template there.  If you clone the strategy and open it up, you will get the following workflow: Each item in the sequence is defining the execution of a T4 template.  The XAML for the workflow is listed below so you can see where the T4 files are defined.  You can simply make a copy of an existing template and make what ever changes you need.   1: <Activity x:Class="GenerateDatabaseScriptWorkflow" ... > 2: <x:Members> 3: <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" /> 4: <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" /> 5: <x:Property Name="ExistingMsl" Type="InArgument(s:String)" /> 6: <x:Property Name="Ssdl" Type="OutArgument(s:String)" /> 7: <x:Property Name="Msl" Type="OutArgument(s:String)" /> 8: <x:Property Name="Ddl" Type="OutArgument(s:String)" /> 9: <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" /> 10: </x:Members> 11: <Sequence> 12: <dbtk:ProgressBarStartActivity /> 13: <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" /> 14: <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" /> 15: <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" /> 16: <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" /> 17: <dbtk:ProgressBarEndActivity ClosePopup="true" /> 18: </Sequence> 19: </Activity>   So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated.  You can get more info and the tool can be downloaded from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87.  There is a comments section on the site so make sure you let the team know what you like and what you don’t like.  Enjoy!
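    As a small example of the kind of change the cloned workflow makes possible: pointing the SSDL generation step at your own copy of the T4 template is just an edit to the TemplatePath attribute shown in the XAML above. The path here is hypothetical:

    ```xml
    <!-- In the cloned workflow: replace the stock template with a customized copy. -->
    <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]"
        TemplatePath="C:\MyTemplates\CSDLToSSDL_TPT_Custom.tt" />
    ```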

    Read the article

  • Using 3G/UMTS in Mauritius

    After some conversation, threads in online forum and mailing lists I thought about writing this article on how to setup, configure and use 3G/UMTS connections on Linux here in Mauritius. Personally, I can only share my experience with Emtel Ltd. but try to give some clues about how to configure Orange as well. Emtel 3G/UMTS surf stick Emtel provides different surf sticks from Huawei. Back in 2007, I started with an E220 that wouldn't run on Windows Vista either. Nowadays, you just plug in the surf stick (ie. E169) and usually the Network Manager will detect the new broadband modem. Nothing to worry about. The Linux Network Manager even provides a connection profile for Emtel here in Mauritius and establishing the Internet connection is done in less than 2 minutes... even quicker. Using wvdial Old-fashioned Linux users might not take Network Manager into consideration but feel comfortable with wvdial. Although that wvdial is primarily used with serial port attached modems, it can operate on USB ports as well. Following is my configuration from /etc/wvdial.conf: [Dialer Defaults]Phone = *99#Username = emtelPassword = emtelNew PPPD = yesStupid Mode = 1Dial Command = ATDT[Dialer emtel]Modem = /dev/ttyUSB0Baud = 3774000Init2 = ATZInit3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0Init4 = AT+cgdcont=1,"ip","web"ISDN = 0Modem Type = Analog Modem The values of user name and password are optional and can be configured as you like. In case that your SIM card is protected by a pin - which is highly advised, you might another dialer section in your configuration file like so: [Dialer pin]Modem = /dev/ttyUSB0Init1 = AT+CPIN=0000 This way you can "daisy-chain" your command to establish your Internet connection like so: wvdial pin emtel And it works auto-magically. Depending on your group assignments (dialout), you might have to sudo the wvdial statement like so: sudo wvdial pin emtel Orange parameters As far as I could figure out without really testing it myself, it is also necessary to set the Access Point (AP) manually with Orange. Well, although it is pretty obvious a lot of people seem to struggle. The AP value is "orange". [Dialer orange]Modem = /dev/ttyUSB0Baud = 3774000Init2 = ATZInit3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0Init4 = AT+cgdcont=1,"ip","orange"ISDN = 0Modem Type = Analog Modem And you are done. Official Linux support from providers It's just simple: Forget it! The people at the Emtel call center are completely focused on the hardware and Mobile Connect software application provided by Huawei and are totally lost in case that you confront them with other constellations. For example, my wife's netbook has an integrated 3G/UMTS modem from Ericsson. Therefore, no need to use the Huawei surf stick at all and of course we use the existing software named Wireless Manager instead of. Now, imagine to mention at the help desk: "Ehm, sorry but what's Mobile Connect?" And Linux after all might give the call operator sleepless nights... Who knows? Anyways, I hope that my article and configuration could give you a helping hand and that you will be able to connect your Linux box with 3G/UMTS surf sticks here in Mauritius.
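    With the [Dialer orange] section defined alongside the [Dialer pin] section shown earlier, bringing up the Orange connection works the same way as for Emtel, daisy-chaining the pin unlock first:

    ```sh
    # Unlock the SIM, then dial using the Orange profile from /etc/wvdial.conf.
    sudo wvdial pin orange
    ```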

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections!For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and a ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
    Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as an O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
    Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example: say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread: 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates over a static snapshot taken by ToArray() 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine, TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
    You might think from its name that TryUpdate() first checks for an item’s existence, and then updates if the item exists; otherwise it returns false.  Well, not quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1 representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // returns the price from the cache, fetching and adding it on first access. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
    15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe (on this you may rely), but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: thread A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary.  Now thread A is done and goes to get the lock, and sees that the item already exists.  In this case even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and it attempted to add it. Let’s illustrate: assume this totally contrived example program which has a dictionary of char to int.  And in this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 2 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
    This is why on these methods your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time.  As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection.  It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently. Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare
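    A common way to address the wasted-generator problem described above is to store Lazy<T> values in the dictionary, so that even if several threads race on GetOrAdd(), only the Lazy instance that wins the race ever has its value factory executed. This pattern is not covered in the excerpt above, so treat the following as a suggested sketch rather than the author's code:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public static class LazyGetOrAddSketch
    {
        private static int _nextNumber = 0;

        // each key maps to a Lazy<int>; the increment only runs when .Value is read,
        // and only the Lazy instance that won the GetOrAdd race is ever read
        private static readonly ConcurrentDictionary<char, Lazy<int>> _dictionary =
            new ConcurrentDictionary<char, Lazy<int>>();

        public static int GetOrdinal(char key)
        {
            return _dictionary
                .GetOrAdd(key, k => new Lazy<int>(() => Interlocked.Increment(ref _nextNumber)))
                .Value;
        }
    }

    With this in place, two racing threads may both construct a Lazy<int> for the same key, but the losing instance is discarded before its factory runs, so no ordinal is ever skipped.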

    Read the article

  • 6 Prominent Features of New GMail User Interface

    - by Gopinath
    GMail’s user interface got a big makeover today and the new user interface is available to everyone. We can switch to the new user interface by clicking on the “Switch to the new look” link available at the bottom right of GMail (if you are on IE 6 or a similarly outdated browser, you will not see the option!). I switched to the new user interface as soon as I noticed the link and played with it for some time. In this post I want to share the prominent features of the all new GMail interface. 1. All New Conversations Interface GMail’s threaded conversations were a game changing feature when first introduced by Google. For a long time we had not seen many updates to the threaded conversation view. In the new GMail interface, threaded conversations sport a great new look – conversations are always visible in a horizontal fashion as opposed to the stacked interface of the earlier version. When you open a conversation, you get a quick glance at each individual message without expanding the thread. Readability is improved a lot now.  Check the image after the break 2. Sender Profile Photos In Email Threads Did you observe the above screenshot of the conversations view? It has profile images of the participants in the thread. Identifying the people in a thread is much easier. 3. Advanced Search Box Search is the heart of Google’s business and it’s their flagship technology. GMail’s search interface is enhanced to let you quickly find the required e-mails. Also you can create mail filters from the search box without leaving the screen or opening up a new popup. 4. Gmail Automatically Resizing To Fit Multiple Devices There is no doubt that this is the post-PC era, where people are using tablets and big screen smartphones more than ever. The new user interface of GMail automatically resizes itself to fit the size of the screen seamlessly. 5. HD Images For Your Themes, Sourced from iStockphoto Are you bored with the minimalistic GMail interface and the few flashy themes? Here come GMail HD themes backed by stock photographs sourced from the iStockPhoto website. If you have a widescreen HD monitor then decorate your inbox with beautiful themes. 6. Resize Labels & Chat Panels Now you get a splitter between the Labels & Chat panels that lets you resize their heights as you prefer. Also, the Labels panel auto-expands its height when you mouse over it to show any hidden labels. Video – overview of new GMail features This article, titled 6 Prominent Features of New GMail User Interface, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as-you-go-along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA, is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself. Can be used interactively - there is some environment where programmers can enter commands one by one Requires no more than one file - neither project files nor makefiles are required for running in batch mode Can easily split code across multiple files - files can reference each other, or there is some support for modules Has good support for data structures - supports structures like arrays, lists, and especially classes Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
    Feature           C#       Python   shell scripting
    ---------------   ------   ------   ---------------
    Interactive       poor     strong   strong
    One file          poor     strong   strong
    Multiple files    strong   strong   moderate
    Data structures   strong   strong   poor
    Features          strong   strong   strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates. Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that, it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this? The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers. EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on, then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested. #!/bin/bash # If the output of this function changes between two successive runs of this # script, we inhibit auto-suspend. function check_activity() { /usr/sbin/nfsstat --server --list } # Prevent the automatic suspend from kicking in. function inhibit_suspend() { # Slightly jiggle the mouse pointer about; we do a small step and # reverse step to try to stop this being annoying to anyone using the # PC. TODO: This isn't ideal, apart from being a bit hacky it stops # the screensaver kicking in as well, when all we want is to stop # the PC suspending. Can 'caffeine' help? export DISPLAY=:0.0 xdotool mousemove_relative --sync -- 1 1 xdotool mousemove_relative --sync -- -1 -1 } LOG="$HOME/log/nfs-suspend-blocker.log" ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current" ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous" echo "Started run at $(date)" >> "$LOG" if [ ! -f "$ACTIVITYFILE1" ]; then check_activity > "$ACTIVITYFILE1" exit 0; fi /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2" check_activity > "$ACTIVITYFILE1" if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then echo "No activity detected since last run" >> "$LOG" else echo "Activity detected since last run; inhibiting suspend" >> "$LOG" inhibit_suspend fi

    Read the article

  • SharePoint 2010 Field Expression Builder

    - by Ricardo Peres
    OK, back to two of my favorite topics, expression builders and SharePoint. This time I wanted to be able to retrieve a field value from the current page declaratively on the markup so that I can assign it to some control’s property, without the need for writing code. Of course, the most straight way to do it is through an expression builder. Here’s the code: 1: [ExpressionPrefix("SPField")] 2: public class SPFieldExpressionBuilder : ExpressionBuilder 3: { 4: #region Public static methods 5: public static Object GetFieldValue(String fieldName, PropertyInfo propertyInfo) 6: { 7: Object fieldValue = SPContext.Current.ListItem[fieldName]; 8:  9: if (fieldValue != null) 10: { 11: if ((fieldValue is IConvertible) && (typeof(IConvertible).IsAssignableFrom(propertyInfo.PropertyType) == true)) 12: { 13: if (propertyInfo.PropertyType.IsAssignableFrom(fieldValue.GetType()) != true) 14: { 15: fieldValue = Convert.ChangeType(fieldValue, propertyInfo.PropertyType); 16: } 17: } 18: } 19:  20: return (fieldValue); 21: } 22:  23: #endregion 24:  25: #region Public override methods 26: public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context) 27: { 28: return (GetFieldValue(entry.Expression, entry.PropertyInfo)); 29: } 30:  31: public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context) 32: { 33: if (String.IsNullOrEmpty(entry.Expression) == true) 34: { 35: return (new CodePrimitiveExpression(String.Empty)); 36: } 37: else 38: { 39: return (new CodeMethodInvokeExpression(new CodeMethodReferenceExpression(new CodeTypeReferenceExpression(this.GetType()), "GetFieldValue"), new CodePrimitiveExpression(entry.Expression), new CodePropertyReferenceExpression(new CodeArgumentReferenceExpression("entry"), "PropertyInfo"))); 40: } 41: } 42:  43: #endregion 44:  45: #region Public override properties 46: public override Boolean SupportsEvaluate 47: { 48: get 49: { 50: return (true); 51: } 52: } 53: #endregion 54: } You will notice that it will even try to convert the field value to the target property’s type, through the use of the IConvertible interface and the Convert.ChangeType method. It must be placed on the Global Assembly Cache or you will get a security-related exception. The other alternative is to change the trust level of your web application to full trust. Here’s how to register it on Web.config: 1: <expressionBuilders> 2: <!-- ... --> 3: <add expressionPrefix="SPField" type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=29186a6b9e7b779f" /> 4: </expressionBuilders> And finally, here’s how to use it on an ASPX or ASCX file inside a publishing page: 1: <asp:Label runat="server" Text="<%$ SPField:Title %>"/>
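    Since GetFieldValue() is public and static, the same helper can also be reused from code-behind when declarative binding isn't convenient. The following is a small sketch of my own (not part of the original post; the page class and control name are purely illustrative), and it only works on a page where SPContext.Current.ListItem is populated:

    using System;
    using System.Reflection;
    using System.Web.UI.WebControls;

    public partial class ArticlePage : System.Web.UI.Page
    {
        protected Label TitleLabel;  // assumed to be declared in the page markup

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);

            // the target property's PropertyInfo drives the IConvertible conversion inside the helper
            PropertyInfo textProperty = typeof(Label).GetProperty("Text");
            TitleLabel.Text = (String)MyNamespace.SPFieldExpressionBuilder.GetFieldValue("Title", textProperty);
        }
    }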

    Read the article

  • F# and the rose-tinted reflection

    - by CliveT
    We're already seeing increasing use of many cores on client desktops. It is a change that has been long predicted. It is not just a change in architecture, but in our notions of efficiency in a program. No longer can we focus on the asymptotic complexity of an algorithm by counting the steps that a single core processor would take to execute it. Instead we'll soon be more concerned about the scalability of the algorithm and how well we can increase the performance as we increase the number of cores. This may even lead us to throw away our most efficient algorithms, and switch to less efficient algorithms that scale better. We might even be willing to waste cycles in order to speculatively execute at the algorithm rather than the hardware level. State is the big headache in this parallel world. At the hardware level, main memory doesn't necessarily contain the definitive value corresponding to a particular address. An update to a location might still be held in a CPU's local cache and it might be some time before the value gets propagated. To get the latest value, and the notion of "latest" takes a lot of defining in this world of rapidly mutating state, the CPUs may well need to communicate to decide who has the definitive value of a particular address in order to avoid lost updates. At the user program level, this means programmers will need to lock objects before modifying them, or attempt to avoid the overhead of locking by understanding the memory models at a very deep level. I think it's this need to avoid statefulness that has led to the recent resurgence of interest in functional languages. In the 1980s, functional languages started getting traction when research was carried out into how programs in such languages could be auto-parallelised. Sadly, the impracticality of some of the languages, the overheads of communication during this parallel execution, and rapid improvements in compiler technology on stock hardware meant that the functional languages fell by the wayside. The one thing that these languages were good at was getting rid of implicit state, and this single idea seems like a solution to the problems we are going to face in the coming years. Whether these languages will catch on is hard to predict. The mindset for writing a program in a functional language is really very different from the way that object-oriented problem decomposition happens - one has to focus on the verbs instead of the nouns, which takes some getting used to. There are a number of hybrid functional/object languages that have become more popular in recent times. These half-way houses make it easy to use functional ideas for some parts of the program while still allowing access to the underlying object-focused platform without a great deal of impedance mismatch. One example is F# running on the CLR which, in Visual Studio 2010, has become a first-class member of the pack. Inside Visual Studio 2010, the tooling for F# has improved to the point where it is easy to set breakpoints and watch values change while debugging at the source level. In my opinion, it is the tooling support that will enable the widespread adoption of functional languages - without this support, people will put off any transition into the functional world for as long as they possibly can. Without tool support, it will be hard to learn these languages. One tool that doesn't currently support F# is Reflector.
    The idea of decompiling IL to a functional language is daunting, but F# is potentially so important I couldn't dismiss the idea. As I'm currently developing Reflector 6.5, I thought it wise to take four days just to see how far I could get in doing so, even if it achieved little more than making it clearer how much was possible, and how long it might take. You can read what happened here, and about the insights it gave us on ways to improve the tool.

    Read the article

  • Oracle Social Network and the Flying Monkey Smart Target

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I teased this before OpenWorld, and for those of you who didn’t make it to the show or didn’t come by the Office Hours to take the Oracle Social Network Technical Tour Noel (@noelportugal) ran, I give you the Flying Monkey Smart Target. In brief, Noel built a target, about two feet tall, which when struck, played monkey sounds and posted a comment to an Oracle Social Network Conversation, all controlled by a Raspberry Pi. He also connected a Dropcam to record the winner just prior to the strike. I’m not sure how it all works, but maybe Noel can post the technical specifics. Here’s Noel describing the Challenge, the Target and a few other tidbits in an interview with Friend of the ‘Lab, Bob Rhubart (@brhubart). The monkey target bits are 2:12-2:54 if you’re into brevity, but watch the whole thing. Here are some screen grabs from the Oracle Social Network Conversation, including the Conversation itself, where you can see all the strikes documented, the picture captured, and the annotation capabilities: That’s Diego in one shot, looking very focused, and Ernst in the other, who kindly annotated himself, two of the development team members. You might have seen them in the Oracle Social Network Hands-On Lab during the show. There’s a trend here. Not by accident, fun stuff like this has become our calling card, e.g. the Kscope 12 WebCenter Rock ‘em Sock ‘em Robots. Not only are these entertaining demonstrations, but they showcase what’s possible with RESTful APIs and get developers noodling on how easy it is to connect real objects to cloud services to fix pain points. I spoke to some great folks from the City of Atlanta about extending the concepts of the flying monkey target to physical asset monitoring. Just take an internet-connected camera with REST APIs like the Dropcam, wire it up to Oracle Social Network, and you can hack together a monitoring device for a datacenter or a warehouse. Sure, it’s easier said than done, but we’re a lot closer to that reality than we were even two years ago. Another noteworthy bit from Noel’s interview, beginning at 2:55, is the evolution of the social developer. Speaking of which, make sure to check out the Oracle Social Developer Community. Look for more on the social developer in the coming months. Noel has become quite the Raspberry Pi evangelist, and why not, it’s a great tool, a low-power Linux machine, cheap ($35!) and highly extensible, perfect for makers and students alike. He attended a meetup on Saturday before OpenWorld, and during the show, I heard him evangelizing the Pi and its capabilities to many people. There is some fantastic innovation forming in that ecosystem, much of it with Java. The OTN gang raffled off five Pis, and I expect to see lots of great stuff in the very near future. Stay tuned this week for posts on all our Challenge entrants. There’s some great innovation you won’t want to miss. Find the comments. Update: I forgot to mention that Noel used Twilio, one of his favorite services, during the show to send out Challenge updates and information to all the contestants.

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >