Search Results

Search found 13652 results on 547 pages for 'full'.


  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also:

        Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
        Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer
        Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5.

    The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; the request gets relayed to the on-premise service. In Node.js the authentication step looks like this:

        var options = {
            host: acs.namespace() + '-sb.accesscontrol.windows.net',
            path: '/WRAPv0.9/',
            method: 'POST'
        };
        var values = {
            wrap_name: acs.issuerName(),
            wrap_password: acs.issuerSecret(),
            wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/'
        };
        var req = https.request(options, function (res) {
            console.log("statusCode: ", res.statusCode);
            console.log("headers: ", res.headers);
            res.on('data', function (d) {
                var token = qs.parse(d.toString('utf8'));
                callback(token.wrap_access_token);
            });
        });
        req.write(qs.stringify(values));
        req.end();

    Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call:

        token = 'WRAP access_token=\"' + swt + '\"';
        //...
        var reqHeaders = {
            Authorization: token
        };
        var options = {
            host: acs.namespace() + '.servicebus.windows.net',
            path: '/rest/reverse?string=' + requestUrl.query.string,
            headers: reqHeaders
        };
        var req = https.request(options, function (res) {
            console.log("statusCode: ", res.statusCode);
            console.log("headers: ", res.headers);
            response.writeHead(res.statusCode, res.headers);
            res.on('data', function (d) {
                var reversed = d.toString('utf8');
                console.log('svc returned: ' + d.toString('utf8'));
                response.end(reversed);
            });
        });
        req.end();

    Running the sample: the usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response, but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards.
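    For comparison with the full-trust .NET consumer from Part 2, the same two REST calls take only a few lines of C#. This is a minimal sketch rather than the code from the series: the namespace, issuer name, and issuer secret are placeholder values you would substitute with your own.

        using System;
        using System.Collections.Specialized;
        using System.Net;
        using System.Text;
        using System.Web; // HttpUtility lives in System.Web.dll; add that reference to a console project

        class AcsWrapClient
        {
            static void Main()
            {
                // Placeholder values - substitute your own service namespace and issuer credentials
                var ns = "your-namespace";
                var values = new NameValueCollection
                {
                    { "wrap_name", "serviceidentity" },
                    { "wrap_password", "your-issuer-secret" },
                    { "wrap_scope", "http://" + ns + ".servicebus.windows.net/" }
                };

                using (var client = new WebClient())
                {
                    // POST the WRAP credentials to ACS - the same endpoint the Node.js sample uses
                    var response = client.UploadValues(
                        "https://" + ns + "-sb.accesscontrol.windows.net/WRAPv0.9/", values);

                    // The SWT comes back form-encoded; pull out the access token
                    var body = Encoding.UTF8.GetString(response);
                    var swt = HttpUtility.ParseQueryString(body)["wrap_access_token"];

                    // Present the token to the Service Bus relay in the Authorization header
                    client.Headers[HttpRequestHeader.Authorization] =
                        "WRAP access_token=\"" + swt + "\"";
                    var result = client.DownloadString(
                        "https://" + ns + ".servicebus.windows.net/rest/reverse?string=abc123");
                    Console.WriteLine(result);
                }
            }
        }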


  • Create Panoramic Photos with Windows Live Photo Gallery

    - by Matthew Guay
    Have you ever wanted to capture the view from a mountain or the full size of a building?  Here’s how you can stitch multiple shots together into the perfect panoramic picture for free with Windows Live Photo Gallery.

    Getting Started

    First, make sure you have Windows Live Photo Gallery installed (link below).  Live Photo Gallery is part of the Windows Live Essentials suite; you can select other programs to install along with it if you want. Make sure to uncheck setting your home page to MSN and setting your search provider to Bing if you don’t want them changed.

    Now, make sure you have pictures that will work well for a panorama.  These need to be taken from the same spot, and the edges of the pictures need to overlap so the program can find where the pictures connect.  Here we have taken pictures inside a building with a cell phone camera.

    Make your Panorama

    Open Live Photo Gallery, and find the pictures you want to use in your panorama.  It will automatically index and display all of the photos in your Pictures folder, or Library if you’re using Windows 7. If your pictures are saved elsewhere, add that folder to Photo Gallery.  Click File, Include a folder in the gallery, and select the correct folder at the prompt.

    Now select all of the pictures that you will use in your panorama.  You can easily do this by clicking the checkbox on each picture that appears when you hover over it.

    Once all of the pictures are selected, click Make in the menu bar and select Create panoramic photo…  Alternately, right-click on any of the pictures you’ve selected, and click Create panoramic photo…

    Live Photo Gallery will analyze your photos and composite them together to create a panorama.  The amount of time it takes will vary depending on the number of photos, the size of the pictures, and your computer’s speed. When it’s finished making the panorama, you’ll be prompted to enter a file name and save the picture.

    Your new panorama picture will open as soon as it’s saved.  Depending on your shots, the picture may have quite a bit of black space around the edges where the pictures didn’t cover the exact same amount of area. To correct this, click Fix on the menu bar, and then select Crop Photo in the sidebar that opens. Select the center of the picture with the crop tool, and click Apply when you’ve got the selection you want. Live Photo Gallery automatically saves your picture changes, and you can revert back to the original picture if you wish.

    Now you’ve got a nice panoramic shot, trimmed and ready to print, share, and more.

    Conclusion

    Panoramic shots are great ways to capture your whole surroundings, whether it’s a sports stadium, mall, or a scenic mountain view.  They can also be a great way to capture more with low-resolution cameras.
Link: Download Windows Live Photo Gallery


  • Networking- Wireless and Wired not working

    - by JJ White
    So everything in Ubuntu has been working great until networking stopped working. I've spent the better part of two days scouring for a fix with no luck. Here is my info... your help would so be appreciated.

    grep -i eth /var/log/syslog | tail

        Sep 25 16:31:59 jj-laptop NetworkManager[970]: <info> (eth0): cleaning up...
        Sep 25 16:31:59 jj-laptop NetworkManager[970]: <info> (eth0): taking down device.
        Sep 25 16:31:59 jj-laptop kernel: [23403.998837] sky2 0000:02:00.0: eth0: disabling interface
        Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): now managed
        Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
        Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): bringing up device.
        Sep 25 16:35:54 jj-laptop kernel: [23413.629424] sky2 0000:02:00.0: eth0: enabling interface
        Sep 25 16:35:54 jj-laptop kernel: [23413.635703] ADDRCONF(NETDEV_UP): eth0: link is not ready
        Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): preparing device.
        Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): deactivating device (reason 'managed') [2]

    and then ifconfig -a

        eth0    Link encap:Ethernet  HWaddr 00:17:42:14:e9:e1
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:4605 errors:0 dropped:0 overruns:0 frame:0
                TX packets:287 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:315161 (315.1 KB)  TX bytes:63680 (63.6 KB)
                Interrupt:16

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:13018 errors:0 dropped:0 overruns:0 frame:0
                TX packets:13018 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:880484 (880.4 KB)  TX bytes:880484 (880.4 KB)

        wlan0   Link encap:Ethernet  HWaddr 00:13:02:d0:ee:13
                BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    also iwconfig

        lo      no wireless extensions.

        wlan0   IEEE 802.11abg  ESSID:off/any
                Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:on

        eth0    no wireless extensions.

    and of course sudo lshw -C network

        *-network
             description: Ethernet interface
             product: 88E8055 PCI-E Gigabit Ethernet Controller
             vendor: Marvell Technology Group Ltd.
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: 12
             serial: 00:17:42:14:e9:e1
             size: 1Gbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A latency=0 link=no multicast=yes port=twisted pair speed=1Gbit/s
             resources: irq:44 memory:f0000000-f0003fff ioport:2000(size=256) memory:c0a00000-c0a1ffff
        *-network DISABLED
             description: Wireless interface
             product: PRO/Wireless 3945ABG [Golan] Network Connection
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: wlan0
             version: 02
             serial: 00:13:02:d0:ee:13
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwl3945 driverversion=3.2.0-31-generic firmware=15.32.2.9 latency=0 link=no multicast=yes wireless=IEEE 802.11abg
             resources: irq:45 memory:c0100000-c0100fff

    last but not least lsmod

        Module Size Used by nls_iso8859_1 12617 0 nls_cp437 12751 0 vfat 17308 0 fat 55605 1 vfat usb_storage 39646 0 uas 17828 0 dm_crypt 22528 0 rfcomm 38139 0 parport_pc 32114 0 bnep 17830 2 ppdev 12849 0 bluetooth 158438 10 rfcomm,bnep binfmt_misc 17292 1 snd_hda_codec_idt 60251 1 pcmcia 39791 0 joydev 17393 0 snd_hda_intel 32765 3 snd_hda_codec 109562 2 snd_hda_codec_idt,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec snd_pcm 80845 2 snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 yenta_socket 27428 0 snd_rawmidi 25424 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event pcmcia_rsrc 18367 1 yenta_socket wacom_w8001 12906 0 irda 185517 0 arc4 12473 2 iwl3945 73111 0 iwl_legacy 71134 1 iwl3945 mac80211 436455 2 iwl3945,iwl_legacy cfg80211 178679 3 iwl3945,iwl_legacy,mac80211 snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq dm_multipath 22710 0 pcmcia_core 21511 3 pcmcia,yenta_socket,pcmcia_rsrc psmouse 96619 0 apanel 12718 0 serport 12808 1 serio_raw 13027 0 crc_ccitt 12595 1 irda mac_hid 13077 0 snd 62064 15 snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device input_polldev 13648 1 apanel soundcore 14635 1 snd snd_page_alloc 14115 2 snd_hda_intel,snd_pcm lp 17455 0 fujitsu_laptop 18504 0 parport 40930 3 parport_pc,ppdev,lp dm_raid45 76451 0 xor 25987 1 dm_raid45 dm_mirror 21822 0 dm_region_hash 16065 1 dm_mirror dm_log 18193 3 dm_raid45,dm_mirror,dm_region_hash sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci sky2 49545 0 i915 414939 3 drm_kms_helper 45466 1 i915 drm 197692 4 i915,drm_kms_helper i2c_algo_bit 13199 1 i915 video 19068 1 i915


  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update data from Salesforce from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website.

    Step 1: Download and install the RSSBus Excel Add-In available on our website.

    Step 2: Open Excel and create placeholder cells for the connection details that are needed from the macro. In this article, a spreadsheet will be created for batch inserts; these cells will store the connection details and will be used to report the job Id, the batch Id, and the batch status.

    Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce.

    Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a stored procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 that were created in Step 2 to read the connection settings from the Excel spreadsheet and to write out the status of the procedure.

        On Error GoTo Error   ' jump to the Error handler below on any failure

        methodName = "CreateJob"
        module.SetProviderName ("Salesforce")
        nameArray = Array("ObjectName", "Action", "ConcurrencyMode")
        valueArray = Array("Lead", "insert", "Serial")

        user = Range("B1").value
        pass = Range("B2").value
        atoken = Range("B3").value

        If (Not user = "" And Not pass = "" And Not atoken = "") Then
            module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";")
            If module.CallSP(methodName, nameArray, valueArray) Then
                Dim ColumnCount As Integer
                ColumnCount = module.GetColumnCount
                Dim idIndex As Integer
                For Count = 0 To ColumnCount - 1
                    Dim colName As String
                    colName = module.GetColumnName(Count)
                    If module.GetColumnName(Count) = "id" Then
                        idIndex = Count
                    End If
                Next
                While (Not module.EOF)
                    Range("B4").value = module.GetValue(idIndex)
                    module.MoveNext
                Wend
            Else
                MsgBox "The CreateJob query failed."
            End If
            Exit Sub
        Else
            MsgBox "Please specify the connection details."
            Exit Sub
        End If

    Error:
        MsgBox "ERROR: " & Err.Description

    Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com. They can be seen at Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo.

    Distributing an Excel File With Macros

    An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed.

    Macro Sample File

    Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.
Note: You may get an error message stating: "Can't find project or library." in Excel 2007, since this example is made using Excel 2010. To resolve this, navigate to Tools -> References and uncheck the "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" listed below it.


  • Bluetooth not found on BCM43228

    - by TK Kocheran
    I've got a Broadcom BCM43228 mPCIe card which came with my motherboard (ASUS ROG Maximus V Extreme, can't seem to find a link to what the card is) which is working great for WiFi right now, but I can't detect the Bluetooth hardware onboard. In Windows, I have full Bluetooth 4.0 support.

    $ lspci

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:14.0 USB controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.4 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 5 (rev c4)
        00:1c.6 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 7 (rev c4)
        00:1c.7 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 8 (rev c4)
        00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 1189 (rev a1)
        01:00.1 Audio device: NVIDIA Corporation Device 0e0a (rev a1)
        0d:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        0e:00.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:01.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:04.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:05.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:06.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:07.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:08.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        0f:09.0 PCI bridge: PLX Technology, Inc. PEX 8608 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch (rev ba)
        10:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        12:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
        15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n
        17:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)

    The key line seems to be:

        15:00.0 Network controller: Broadcom Corporation BCM43228 802.11a/b/g/n

    If I try to detect the Bluetooth card, I don't see anything:

        $ hcitool dev
        Devices:

    rfkill list all: Output
    lspci: Output
    lsusb: Output

    I finally found the card with usb-devices:

        T:  Bus=01 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=12 MxCh= 0
        D:  Ver= 2.00 Cls=ff(vend.) Sub=01 Prot=01 MxPS=64 #Cfgs= 1
        P:  Vendor=0b05 ProdID=17b5 Rev=01.12
        S:  Manufacturer=Broadcom Corp
        S:  Product=BCM20702A0
        S:  SerialNumber=############
        C:  #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=0mA
        I:  If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
        I:  If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=01 Prot=01 Driver=(none)
        I:  If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
        I:  If#= 3 Alt= 0 #EPs= 0 Cls=fe(app. ) Sub=01 Prot=01 Driver=(none)

    I've heard that this card needs to have firmware injected into it in order to function. If that's the case, how do I do it?


  • Windows Azure Mobile Services Updates Keep Coming

    - by Clint Edmonson
    Some exciting new Windows Azure Mobile Services features were delivered to production this week. The highlights include:

        • iPhone and iPad connectivity support via a new iOS SDK
        • Integrated authentication, so developers can configure user authentication via Microsoft Account, Facebook, Twitter, and Google
        • New server-side Mobile Service script modules
        • Access to structured storage, Windows Azure Blob, Table, Queues, and Service Bus
        • Email services through partnership with SendGrid
        • SMS & voice services through partnership with Twilio
        • Mobile Services hosting expanded to west coast US

    The iOS SDK

    I’m excited to share that we've announced the release of an under-development iOS client SDK for Windows Azure Mobile Services. The iOS SDK joins the Windows 8 SDK launched with Windows Azure Mobile Services as well as client SDKs released by Xamarin for MonoTouch and MonoDroid. The native iOS SDK is for developers programming in Objective-C on the iPhone and iPad platforms. The SDK gives developers the same level of access to data storage using dynamic schematization that is available for Windows 8. Also, iOS applications can use the same authentication options available in Mobile Services. While full iOS support is still in development, the libraries are currently available on GitHub. There’s a great getting started tutorial to walk you through building a simple iOS “Todo List” app that stores data in Windows Azure. These additional tutorials explore how to use the iOS client libraries to store data and authenticate users:

        • Get Started with data in Mobile Services for iOS
        • Get Started with authentication in Mobile Services for iOS

    What’s New in Authentication

    Available to both iOS and Windows 8 developers, Mobile Services has expanded its authentication options. Developers can now use Microsoft, Facebook, Twitter, and Google authentication. Similar to using Microsoft accounts for authentication, developers must sign up through Facebook, Twitter, or Google's developer portal in order to authenticate through them. These tutorials walk through how to register your Mobile Service with an identity provider:

        • How to register your app with Microsoft Account
        • How to register your app with Facebook
        • How to register your app with Twitter
        • How to register your app with Google

    And these tutorials walk through authenticating against Mobile Services:

        • Get started with authentication in Mobile Services for Windows Store (C#)
        • Get started with authentication in Mobile Services for Windows Store (JavaScript)
        • Get started with authentication in Mobile Services for iOS

    A minimal C# sketch of the Windows Store login flow appears at the end of this post.

    What’s New in Mobile Service Scripts

    Some great new functionality is now available in the Mobile Service script layer. These server-side scripts are triggered off of any CRUD operation on a Mobile Service's table and can already handle data and query validation, filtering, web requests and more. Today, the Azure SDK module is now available to these scripts, giving them access to blob storage, service bus, and table storage. Check out the new tutorials on the Windows Azure Node.js developer center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module. In addition, SendGrid and Twilio are now available via modules that can be called from the scripts as well. This gives developers the ability to send emails (SendGrid) or SMS text messages (Twilio) whenever a script is fired. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid and 1000 free text messages from Twilio.
    Expanded Data Center Availability

    In addition to Mobile Services being available in our US East data center, they can now be spun up in US West. The above features are all now live in production and are available to use immediately.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. The Windows Azure Mobile Developer Center has been updated with new tutorials that cover these new features in detail. And don’t forget - Windows Azure Mobile Services are still free for your first ten applications running on shared compute instances. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
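    As promised above, here is a minimal sketch of the Windows Store (C#) login flow using the Mobile Services client library. The service URL and application key are placeholders, and TodoItem is a hypothetical table class; treat this as an outline of the calls involved rather than a copy of any tutorial.

        using System;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices;

        public class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
        }

        public static class MobileServicesAuthSketch
        {
            // Placeholder endpoint and key - use your own Mobile Service's values
            private static readonly MobileServiceClient Client = new MobileServiceClient(
                "https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

            public static async Task RunAsync()
            {
                // Prompt the user to sign in; Facebook here could equally be Twitter,
                // Google, or MicrosoftAccount - the expanded provider list described above
                MobileServiceUser user =
                    await Client.LoginAsync(MobileServiceAuthenticationProvider.Facebook);
                Console.WriteLine("Logged in as {0}", user.UserId);

                // Once authenticated, table operations run under the user's identity
                var table = Client.GetTable<TodoItem>();
                await table.InsertAsync(new TodoItem { Text = "Try out Mobile Services" });
            }
        }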


  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great.  As soon as you jump to EFv4, you may find yourself getting odd errors that you can’t seem to catch.  The problem almost always has something to do with the new lazy loading feature in Entity Framework 4.  With Entity Framework 1 you didn’t have lazy loading, so this problem didn’t surface.

    Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by either using the Include or Load method:

        var people = context.People.Include("Addresses");

    or

        people.Addresses.Load();

    Lazy loading works the first time the Person.Addresses collection is accessed:

        var people = context.People.ToList();

        // only Person data is currently in memory

        foreach (var person in people)
        {
            // EF determines that no Address data has been loaded and lazy loads
            int count = person.Addresses.Count();
        }

    Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested.  It can make your life easier or it can make it a big pain.  So what does this have to do with WCF?  One word: serialization.

    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using.  Well, if I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed.  When that happens, the Address data is lazy loaded.  Then the Address is serialized, the Person property is accessed, and then also serialized, and then the Addresses collection is accessed.  Now the second time through, lazy loading doesn’t kick in, but you can see the infinite loop caused by this process.  This is a problem with any serialization, but I personally found it trying to use WCF.

    The fix for this is to simply turn off lazy loading.  This can be done at each call by using context options:

        context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly.  Note, this is if you are using the standard Entity Framework classes.  If you are using POCO, you will have to do something slightly different.  With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO.  This basically creates a proxy object that is a full Entity Framework object that sits between the context and the POCO object.  When using POCO with WCF (or any serialization), just turning off lazy loading doesn’t cut it.  You have to turn off the proxy creation to ensure that your classes will serialize properly:

        context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis.  If you use a new context for each set of operations (which you should) then you can turn either lazy loading or proxy creation on and off as needed.
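    To show how these two options fit inside a service operation, here is a minimal sketch under the Person/Address model above; PersonService and the PeopleEntities context are hypothetical names, not code from the post.

        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;

        [ServiceContract]
        public interface IPersonService
        {
            [OperationContract]
            List<Person> GetPeople();
        }

        public class PersonService : IPersonService
        {
            public List<Person> GetPeople()
            {
                // A fresh context per call, as recommended above
                using (var context = new PeopleEntities())
                {
                    // Disable lazy loading (and proxy creation, for POCO entities)
                    // before WCF serializes the results
                    context.ContextOptions.LazyLoadingEnabled = false;
                    context.ContextOptions.ProxyCreationEnabled = false;

                    // Eager-load the related data the client actually needs
                    return context.People.Include("Addresses").ToList();
                }
            }
        }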


  • Unity 3d support for multiple X-screens

    - by stewbond
    I've installed Ubuntu 12.04 this weekend and have had problems getting Unity3d to work with my triple monitor set-up. I've installed the latest nVidia drivers for my 2 nVidia video cards and have used NVIDIA X Server Settings to configure everything. If I stick each monitor on a separate X screen, all but the primary X screen will appear white and the cursor will be a black X. I can start a terminal on this screen but cannot drag other windows to it. If I kill the "Nautilus" process, the white background disappears and I see my desktop background, but my cursor is still an X and I still cannot drag windows to it. If I enable TwinView, I can get one screen on two monitors to work properly, but the white screen remains on my third monitor. In addition, I don't like using TwinView because my full-screen applications get stretched. My current solution is to "Enable Xinerama", use all separate X screens and revert to Unity2d. I'd love a solution to this, as Ubuntu 12.10 is not planned to support Unity-2d and I prefer the aesthetics of 3d anyway. I can provide Xorg.conf for all configurations and any other suggested diagnostic information. Below is my closest-to-working xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:38:49 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" RightOf "Screen2"
            Screen 1 "Screen1" RightOf "Screen0"
            Screen 2 "Screen2" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "1"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor1"
            VendorName "Unknown"
            ModelName "CRT-1"
            HorizSync 28.0 - 55.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor2"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Monitor"
            Identifier "Monitor3"
            VendorName "Unknown"
            ModelName "Microvitec PLC MV191"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 9500 GT"
            BusID "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier "Device2"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Device3"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTS 450"
            BusID "PCI:1:0:0"
            Screen 2
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-0: 1280x1024_75 +0+0; DFP-0: 1280x1024 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device "Device1"
            Monitor "Monitor1"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen2"
            Device "Device2"
            Monitor "Monitor2"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-2: 1280x1024_75 +0+0; DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen3"
            Device "Device3"
            Monitor "Monitor3"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP-2: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

    Other people have had similar issues. I haven't been successful with any of the few suggestions submitted:

        Can not get Dual Monitors to work on Different GPUs
        http://askubuntu.com/questions/30412/3-monitors-with-2-video-cards-not-working?rq=1
        How to get second display to work alongside primary display?
        http://askubuntu.com/questions/148007/multiple-monitors-only-work-with-unity-2d/201086#201086
        xorg.conf and Unity3D?


  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address different types of real world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production.

    User Stories

    The user-end functionalities of this starter kit are pretty simple and straightforward, focused on performing CRUD operations on employee records as described below:

        • Creating a new employee record
        • Reading existing employee records
        • Updating an existing employee record
        • Deleting existing employee records

    Key Technology Areas

        • ASP.NET 4.0
        • Entity Framework 4.0
        • T-4 Template
        • Visual Studio 2010

    Architectural Objective

    There is no universal architecture which can be considered the best for all sorts of applications around the world. Based on requirements, constraints and environment, application architecture can differ from one to another. Trade-off factors are one of the important considerations while deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’ or 80-20 rule, where it is targeted to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. “Productivity” as the architectural objective typically also includes other trade-off factors, such as testability, flexibility, performance etc. Fortunately Microsoft .NET Framework 4.0 and Visual Studio 2010 include lots of great features that have been implemented cleverly in this project to reduce these trade-off factors to a minimum level.

    Why Employee Info Starter Kit is Not a Framework

    Application frameworks are really great for productivity, and some of them are really unavoidable in this modern age. However, relying on too many frameworks may overkill a project, as frameworks are typically designed to serve a wide range of different usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, which are fully mock-testable due to language support for partial methods and the latest support for mock testing in Entity Framework. A sketch of the partial-method extension point this enables appears at the end of this post.

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications

    Software development is one of the most rapidly growing industries around the globe, where the technology is updated very frequently to adapt to greater challenges over time. There are literally thousands of community web sites, blogs and forums that are dedicated to providing support for adopting new technologies. While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real world scenarios, or too “complex and detailed”, typically focused on achieving a product goal (such as CMS, e-Commerce etc.) from an "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid approach of “simple and detailed”: a simple domain has been chosen to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly.

    Roadmap

    Since its first release in 2008 in the MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since. Encouraged by this great response, we have a strong commitment to the community to continuously provide support for it with respect to the latest technologies. Currently hosted in Codeplex, this community driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET Webform, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud), Visual Studio Automated Test etc. See here for the full list of current and future editions.
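    To make the partial-method point concrete, here is a minimal sketch of the pattern that T-4 generated layers commonly use; EmployeeDataAccess and its hook are hypothetical names for illustration, not the starter kit's actual generated code.

        using System;

        // Generated half: the T-4 template emits the CRUD body plus a
        // partial method declaration that compiles away if never implemented.
        public partial class EmployeeDataAccess
        {
            partial void OnEmployeeCreating(string name);

            public void CreateEmployee(string name)
            {
                OnEmployeeCreating(name); // validation/test hook
                Console.WriteLine("INSERT INTO Employee ... '{0}'", name);
            }
        }

        // Hand-written half: developers (or tests) opt in to the hook
        // without ever touching the generated file.
        public partial class EmployeeDataAccess
        {
            partial void OnEmployeeCreating(string name)
            {
                if (string.IsNullOrEmpty(name))
                    throw new ArgumentException("Employee name is required.");
            }
        }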


  • A View from the Top – Jan Ackerman (VP APAC Recruiting)

    - by user769227
    This week, HeadHunt Magazine in Singapore took the opportunity to publish an interview with Jan Ackerman, who is Vice President for Recruitment for Asia Pacific here at Oracle. The link to the online interview can be found here. Below is the interview in full as published in HeadHunt Magazine.

    A View from the Top – Jan Ackerman

    Written by HeadHunt on August 16, 2012, by Susheela Menon

    Jan Ackerman is the Vice President for Recruiting in Asia Pacific and Japan at Oracle.

    Which particular personal trait do you attribute your professional success to?

    Perseverance has been the most important trait that has attributed to my professional success. Endurance and perseverance combined to win in the end has always been a great credo. I find that this trait carries through in my professional as well as my personal life. I enjoy sport fishing, and find that perseverance with a great deal of patience in this hobby is critical to the overall enjoyment and success in this sporting activity. In the same way, this doggedness - steadfastness with persistence - and tenacity toward an unyielding course of action has served me well in reaching goals and thus greater success.

    What’s the biggest challenge you have faced in your career so far?

    I have to constantly keep pace with ever changing technology in my career. The industry changes rapidly and requires me to stay on top of the latest trends and advancements. Outside of work, I like to develop software as a hobby, and in order to ensure that what I am developing will meet what the business needs, I have to continually innovate and stay current on the latest trends in the industry to deliver a solution that will delight the end-user.

    Best career advice you have ever received.

    Always be forthright and honest with your customers and peers; mixed with a “can do” attitude, a great and fulfilling career can be yours to have and hold.

    What makes Oracle a great place to be in?

    The freedom to innovate and pave new avenues of success is one of the greatest things about working here at Oracle. We are always looking to grow and improve our business for our customers, and we are always adapting to present and future industry demands. This means we are always looking to change, to perform better and to do things differently. All these create a culture and spirit of innovation and success.

    What motivates you to be in the HR sector?

    I really like working with and helping people. HR is all about “the people” in the organisation, and staying focused every day on making things better for the Oracle team gives me a great deal of happiness.

    Describe your leadership style.

    I am very direct and goal-oriented. I provide ideas and guidance and then give the team all the freedom they need to reach a successful outcome. I can also be a very “roll up your sleeves” kind of manager when the task needs a bit of a push.

    What’s the biggest business challenge you see in your industry right now?

    The ability to keep pace with all the convergence in the industry and to continue to stay focused on delivering top talent to serve Oracle’s customers well. Our unique recruiting model has served us well in meeting these needs. We are well-placed in this goal and look forward to maintaining Oracle’s leadership role in the industry.


  • Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware

    - by Manish Palaparthy
    Oracle enterprise applications have been available with modern web browser based interfaces for a while now. The web browsers available in smart phones no longer require a special markup language such as WML, since the processing power of these handsets is now quite near that of a typical personal computer. Modern mobile devices such as the iPhone, Android phones, BlackBerry, and Windows 8 devices can now render XHTML and HTML quite well. This means you could potentially use your mobile browser to access your favorite enterprise application. While the mobile browser would render the UI, you might find it difficult to use due to the formatting and presentation of the native UI. Smart phones offer a lot more than just a powerful web browser; they offer capabilities such as maps, GPS, multi touch, pinch zoom, accelerometers, vivid colors, camera with video, support for 3G/4G networks, cloud storage, NFC, streaming media, tethering, voice based features, multi tasking, messaging, social networking, web browsers with support for HTML 5 and many more features.

    While the full potential of enterprise mobile apps is yet to be realized, Oracle has published a few of its applications that take advantage of the above capabilities and are available for the iPhone natively. Here are some of them:

        • Oracle Business Approvals for Managers: Offers a highly intuitive user interface built as a native mobile application to conveniently access pending actions related to expenses, purchase requisitions, HR vacancies and job offers. You can even view BI reports related to the worklist actions. Works with Oracle E-Business Suite.
        • Oracle Business Indicators: Real-time secure access to OBI reports.
        • Oracle Business Approvals for Sales Managers: Enables sales executives to review key targeted tasks and access relevant business intelligence reports. Works with Siebel CRM, Siebel Quote & Order Capture.
        • Oracle Mobile Sales Assistant: CRM application that provides real-time, secure access to the information your sales organization needs, and lets you complete frequent tasks and collaborate with colleagues and customers. Works with Oracle CRM.
        • Oracle Mobile Sales Forecast: Designed specifically for the mobile business user to view key opportunities. Works with Oracle CRM On Demand.
        • Oracle iReceipts: Part of Oracle PeopleSoft Expenses, which allows users to create and submit expense lines for cash transactions in real-time. Works with Oracle PeopleSoft Expenses.

    Now that we have seen some mobile apps that Oracle has published, I am sure you are intrigued as to how to develop your own clients for the use-cases that you deem most fit. For that, Oracle has ADF Mobile.

    ADF Mobile

    You could develop mobile applications with the SDKs available for the smart phone platforms, but you'd really have to be a mobile ninja developer to develop apps with a rich user experience like the ones above. The challenges really multiply when you have to support multiple mobile devices. The ADF Mobile framework is really handy to meet this challenge. ADF Mobile can be used to:

    Develop apps for the mobile browser: An application built with the ADF Mobile framework installs on a smart device, renders its user interface via HTML5, and has access to device services. This means the programming model is primarily web-based, which offers consistency with other enterprise applications as well as easier migration to new platforms.

    Develop apps for the mobile client (native apps): These applications have access to device services, enabling a richer experience for users than a browser alone can offer. ADF Mobile enables rapid and declarative development of rich, on-device mobile applications. Developers only need to write an application once, and then they can deploy the same application across multiple leading smart phone platforms.

    Oracle SOA Suite

    Although the mobile users are using the smart phone apps, and the actual transactions are being executed in the underlying app, there is a lot of technical wizardry going on under the surface. The key technical components needed to

        1. make web service calls,
        2. authenticate,
        3. intercept web service calls and add security credentials to the request,
        4. invoke the services of the enterprise application, and
        5. integrate with the enterprise application via the adapter

    are all implemented at the SOA infrastructure layer. As you can see from the above diagram, the key prerequisites to mobile-enable an enterprise application are:

        • The core enterprise application
        • Oracle SOA Suite
        • ADF Mobile


  • Unit Testing DateTime – The Crazy Way

    - by João Angelo
    We all know that the process of unit testing code that depends on DateTime, particularly the current time provided through the static properties (Now, UtcNow and Today), is a PITA. If you go ask how to unit test DateTime.Now on Stack Overflow, I’ll bet that you’ll get two kinds of answers:

        1. Encapsulate the current time in your own interface and use a standard mocking framework;
        2. Pull out the big guns like Typemock Isolator, JustMock or Microsoft Moles/Fakes and mock the static property directly.

    Now each alternative has its pros and cons, and I would have to say that I lean more toward the second approach, because the first adds a layer of abstraction just for the sake of testability. However, the second approach depends on commercial tools that not every shop wants to buy, or on the not so friendly Microsoft Moles. (Sidenote: Moles is now named Fakes and it will ship with VS 2012.)

    This tends to leave people without an acceptable and simple solution, so after reading another of these types of questions on SO I came up with yet another alternative, one based on the first alternative presented here but one that tries really hard not to get in your way with yet another layer of abstraction. So, without further ado, I present you the Tardis. The Tardis is a single section of conditionally compiled code that overrides the meaning of the DateTime expression inside a single class. You still get the normal coding experience of using DateTime all over the place, but in a DEBUG compilation your tests will be able to mock every static method or property of the DateTime class. An example follows, while the full Tardis code can be downloaded from GitHub:

        using System;
        using NSubstitute;
        using NUnit.Framework;
        using Tardis;

        public class Example
        {
            public Example() : this(string.Empty) { }

            public Example(string title)
            {
        #if DEBUG
                this.DateTime = DateTimeProvider.Default;
                this.Initialize(title);
            }

            internal IDateTimeProvider DateTime { get; set; }

            internal Example(string title, IDateTimeProvider provider)
            {
                this.DateTime = provider;
        #endif
                this.Initialize(title);
            }

            private void Initialize(string title)
            {
                this.Title = title;
                this.CreatedAt = DateTime.UtcNow;
            }

            private string title;

            public string Title
            {
                get { return this.title; }
                set
                {
                    this.title = value;
                    this.UpdatedAt = DateTime.UtcNow;
                }
            }

            public DateTime CreatedAt { get; private set; }
            public DateTime UpdatedAt { get; private set; }
        }

        public class TExample
        {
            public void T001()
            {
                // Arrange
                var tardis = Substitute.For<IDateTimeProvider>();
                tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

                // Act
                var sut = new Example("Title", tardis);

                // Assert
                Assert.That(sut.CreatedAt, Is.EqualTo(tardis.UtcNow));
            }

            public void T002()
            {
                // Arrange
                var tardis = Substitute.For<IDateTimeProvider>();
                var sut = new Example("Title", tardis);
                tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

                // Act
                sut.Title = "Updated";

                // Assert
                Assert.That(sut.UpdatedAt, Is.EqualTo(tardis.UtcNow));
            }
        }

    This approach is also suitable for other similar classes with commonly used static methods or properties, like the ConfigurationManager class.
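    As a rough sketch of how the same trick could extend to ConfigurationManager, here is my own extrapolation of the pattern; IConfigurationManagerProvider and Emailer are hypothetical names, not part of the Tardis download.

        using System.Collections.Specialized;
        using System.Configuration;

        // The provider mirrors the static members the class actually uses, so the
        // same expression compiles against System.Configuration.ConfigurationManager
        // in release builds (the Tardis trick).
        public interface IConfigurationManagerProvider
        {
            NameValueCollection AppSettings { get; }
        }

        public class ConfigurationManagerProvider : IConfigurationManagerProvider
        {
            // Delegates to the real static API
            public NameValueCollection AppSettings
            {
                get { return System.Configuration.ConfigurationManager.AppSettings; }
            }
        }

        public class Emailer
        {
        #if DEBUG
            // A same-named instance member shadows the static class inside Emailer;
            // tests can substitute it via the internal setter
            internal IConfigurationManagerProvider ConfigurationManager { get; set; }

            public Emailer()
            {
                this.ConfigurationManager = new ConfigurationManagerProvider();
            }
        #endif

            public string SmtpHost
            {
                // Resolves to the provider in DEBUG, to the static class otherwise
                get { return ConfigurationManager.AppSettings["smtpHost"]; }
            }
        }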


  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities—your data. Today’s businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, real-time analytics? Join us in this three part blog series where we’ll look at each component in more detail. Meet us online on October 24th where we’ll take your questions about what issues you are facing in this brave new world of integration.

    Let’s start first with cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform-as-a-service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds.

    For private cloud architectures, consolidation of your databases and data stores is an important step to take to be able to receive the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems to allow you to perform data consolidation without interrupting your business operations. In addition, bulk load and transformation into and out of your private cloud is a crucial step for those companies moving to private cloud. There is also a need for managing data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for organizations.

    But what about public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service—a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Messaging Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com—now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day.

    To learn more about how to leverage Oracle’s Data Integration products for cloud, join us live: Data Integration Breakthroughs Webcast on October 24th, 10 AM PST.


  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully? If so, have you done it programmatically or through the Module Builder? Were you successful? If not, why?

    I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, and other errors that I can't recall now. Two years after, I'm picking it up for a project once again. I'm feeling like I should have developed the whole thing from scratch myself. Some examples:

        • I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file.
        • Constantly getting deployment errors from the module builder. One reason is SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting such records and they keep coming back corrupt. I get other deployment errors, but have not figured them out. Then I have to roll back all files and the database to try again.
        • I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, PHP warnings all over the place.
        • Quick create for custom modules does not appear; hack needed.
        • Its whole cache directory is a joke, permanent data/files are stored there.
        • The module builder interface disappears required fields.
        • Edit the wrong thing, module builder won't deploy again, then pray Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote:

        I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!

    I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were - or your current situations are - to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid.

    After further failed attempts to continue using this software, I continued to stumble upon dead ends when using the module editor; I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we were using it with its out-of-the-box modules we would have stuck with it.


  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story?  Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing.  Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property...  When I have spoken at security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?"  I don't suggest they can't have access to it... in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access.  Access isn't the issue.  'Keeping the data' is the issue.

    We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive. (Well, except the thieves - I can certainly find some fault there.)  Where I find fault is in policy (or lack thereof, sometimes) that allows users to carry around private, and important, data with them.  Mr. Director of IT - it is your fault, not theirs.  Ms. CSO - look in the mirror.

    It isn't like one can't find a network to access the data from.  You are on a network right now.  How many wireless ones (wifi, mifi, cellular...) are there around you, right now?  Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required.  The argument that the laptop had a password and the hard disk was encrypted is ridiculous.  An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'.

    What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota.  The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and.. well... no one knows where the data is, and yes - they were loaded with patient info.  What were you thinking?

    Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Mac, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy.  Since you can get at the data securely from any network, why would you want to take personal responsibility for it?  Both Sun Rays and Secure Global Desktop are widely used in healthcare... but clearly not widely enough.

    We need to do a better job of getting the message out - healthcare (or insert your business type here) and distributed data don't mix. Then add hot desking and 'follow me' printing and you have something that clinicians (and CSOs) love.

    Thanks for putting up my blood pressure, Denny.

    Read the article

  • OS Development. Only Few Particular Questions

    - by Total Anime Immersion
    I am new to this site as a member but have consulted its answers quite a lot of times, and my questions regarding OS development haven't been answered in any forum. In OS development we make a bootloader, and its ORG point is 7C00h. Why so? Why not 0000h? What are the last two signature bytes in the bootloader used for? People on every forum have answered that they are important for the system to recognize the media as bootable, but I want a specific answer: what does each of those bytes actually do? I have the basic concept of a kernel: it relates the different files required in a system and sort of binds up everything that is individually developed. Now, I have floating ideas in my mind regarding different aspects like the keyboard, the mouse, etc. How do I put them all together, and which should I start with first? If possible, please provide a step-by-step outline of kernel startup. Suppose I have developed my OS entirely in C and Assembly: will EXE files work on my system? If they don't, then I have to create my own file format and publish it, which is a bad idea; the next step would be a compiler for a language I have designed myself. How do I implement that compiler on my OS? After all this, my final question is: how do you go about multitasking and multithreading? Also, I don't want to use int 21h, as it's DOS-specific; how do I go about making files, renaming them, and so on? And all assembly books teach 16-bit programming; how do I go about 32-bit or 64-bit with the knowledge I have? If the basics and instructions are the same, I don't mind, but how do I proceed otherwise? Don't tell me to give up the idea, because I WON'T. And don't tell me it's too complex, because I have a sharp knowledge of how a system works, plus C, Java, Assembly, C++, Python, C# and Visual Basic, and not just the basics but full-fledged API development. I really want to go deep into the systems side, so I want professional help. And please don't suggest any books above $20, and they should be available on Flipkart, as Amazon charges massively for shipping and I prefer free shipping from Flipkart.
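    To make the signature question concrete: the two bytes live at offsets 510 and 511 of the 512-byte boot sector and must be 0x55 followed by 0xAA (read together as the little-endian word 0xAA55). That is really all they "do": the BIOS copies the first sector of the boot device to physical address 0x7C00 and only jumps to it if those two bytes are present, which is also why bootloaders are assembled with ORG 7C00h rather than 0000h. As a minimal sketch (the program name, the image argument and the exit codes are arbitrary choices of mine, not part of any standard), a C++ checker for a raw disk image could look like this:

    #include <fstream>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        if (argc < 2) {
            std::cerr << "usage: checksig <disk-image>\n";
            return 1;
        }
        std::ifstream img(argv[1], std::ios::binary);
        if (!img) {
            std::cerr << "cannot open " << argv[1] << "\n";
            return 1;
        }
        img.seekg(510);                 // signature offset within the 512-byte sector
        unsigned char sig[2] = {0, 0};
        img.read(reinterpret_cast<char*>(sig), 2);
        // The BIOS loads the sector at 0000:7C00 and boots it only if it ends 0x55 0xAA.
        const bool bootable = img && sig[0] == 0x55 && sig[1] == 0xAA;
        std::cout << (bootable ? "boot signature present\n" : "boot signature missing\n");
        return bootable ? 0 : 2;
    }

    Run it against a floppy or disk image (e.g. checksig boot.img) to confirm whether the medium would be accepted as bootable.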

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation of the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension is required to deploy an application from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, and so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA: when a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases; please see below for the full explanation. There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a “replace” deploy method and an “update” deploy method. “Replace” will delete the Essbase application and replace it; if this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the “update” deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to the dense/sparse settings of the cloned dimensions), will be preserved. Notes: If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size, provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator’s Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit
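    To make the block-size review concrete: a block stores one 8-byte cell for every combination of stored members across the dense dimensions, so $\text{block size (bytes)} = 8 \times \prod_{d \in \text{dense}} \text{stored members}(d)$. For example, with a dense Measures dimension of 200 stored members and a dense AllocationType dimension of 5, the block would be $8 \times 200 \times 5 = 8{,}000$ bytes, roughly 8 KB and comfortably inside the recommendations above. The member counts in this example are purely illustrative; substitute the stored member counts from your own outline.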

    Read the article

  • Oracle Announces Leading ISV Integration With Oracle Sales and Marketing Cloud Service

    - by Richard Lefebvre
    More Than 100 ISVs, including BigMachines, Marketo and Xactly, now Provide Integrated Offerings to Help Maximize Sales and a Single Customer Viewpoint. Demonstrating its continued commitment to business value via open standards and the cloud, Oracle today announced that more than 100 leading ISVs are integrating in the cloud with Oracle Sales and Marketing Cloud Service, a service available through Oracle Cloud. For the first time, Oracle Sales and Marketing Cloud Service users can choose from a wide array of directly integrated third-party solutions, providing a new level of choice, seamless deployment and a single view of customers with preferred implementations. Top partners, including ActivePrime, Avaya, BigMachines, Box, Brainshark, Callidus Software, CirrusPath, Clicktools, CRMIT, DBSync, EchoSign from Adobe, Eloqua, Fliptop, FPX, HarQen, HubSpot, iHance, InsideSales.com, InsideView, Interactive Intelligence, Lingotek, LinkPoint360, Marketo, Nuance, PerspecSys, Postcode Anywhere, Revegy, salesElement, StrikeIron, upsourceIT, White Springs, X+1 and Xactly, have announced their availability and integration today. By integrating with Oracle Sales and Marketing Cloud Service, ISV solutions can easily be leveraged by customers. By choosing Oracle Sales and Marketing Cloud Service as a sales platform, customers will continue to have complete choice of their own quoting, lead management and sales methodology solutions, all pre-integrated with Oracle Sales and Marketing Cloud Service. With integrations built on standards-based technologies such as SOAP web services, Oracle Sales and Marketing Cloud Service customers choosing ISV integrations will also benefit from familiar ease of use and the Oracle Sales and Marketing Cloud Service user interface, including buttons, links and custom objects, for a rich user experience. ISV integration with Oracle Sales and Marketing Cloud Service also enables on-demand contextual data exchange capabilities, linking Oracle Sales and Marketing Cloud Service business data with third-party application data for a complete CRM view. ISVs building robust, repeatable integrations with Oracle Sales and Marketing Cloud Service can begin the process of achieving Oracle Validated Integration, an Oracle PartnerNetwork program that recognizes Oracle partner solutions with proven integration to Oracle Applications. ISVs can learn more about Oracle Validated Integration here. For customers, Oracle Validated Integration means that a partner’s integration has been tested and validated as functionally and technically sound, that the partner solution is integrated with Oracle Sales and Marketing Cloud Service in a reliable, standardized way, and that the integration operates and performs as documented. Oracle Cloud provides a broad portfolio of Platform Services, Application Services, and Social Services, all on a subscription basis. Oracle Cloud delivers instant value and productivity for end users, administrators, and developers through functionally rich, integrated, secure, enterprise cloud services. Supporting Quotes: “BigMachines is a leader in Configure, Price, and Quote solutions in the Cloud. Our solution delivers accurate quotes directly from an opportunity, integrated with the leading Oracle Sales and Marketing Cloud application from Oracle,” says John Pulling, Senior Vice President of Products at BigMachines.
“Together, Big Machines and Oracle efficiently automate changes, enabling a faster, more efficient sales process for our joint customers.”   ”Modern marketing and sales must engage customers and prospects in real time across the web, email, social media, online and offline channels to understand where and how to allocate their budgets for maximum return,” said Srini Venkatesan, Senior VP, Products and Engineering at Marketo. “Alignment and integration with Oracle Sales and Marketing Cloud Service allows Marketo’s solutions to deliver innovative capabilities for sales and marketing to adapt and grow their business on the core Oracle platform for CRM.”   “Sales incentives are the best way to drive better performance. Well managed incentives improve the bottom line, particularly when combined with effective sales systems,” said Christopher Cabrera, president and CEO of Xactly Corporation. “With Oracle Sales and Marketing Cloud Service and Xactly working together, customers gain insight and efficiencies. The combination can create more effective compensation programs, while motivating sales to work to its full potential."   “The tremendous integration of leading ISVs with Oracle Sales and Marketing Cloud Service is a testament to the undeniable business value and demand from customers,” said Anthony Lye, SVP of Oracle CRM. “Oracle Sales and Marketing Cloud Service continues to define the industry, and we are proud to work with these leading ISVs to help users simultaneously maximize sales and revenue and extend their current deployments for a deeper and single customer viewpoint.” Supporting Resources Oracle Sales and Marketing Cloud Service Learn More About Oracle Cloud

    Read the article

  • Surface V2.0

    - by Dennis Vroegop
    It’s been quiet around here. And the reason for that is that it’s been quiet around Surface for a while. Now, a lot of people assume that when a product team isn’t making much noise, it must mean they have stopped working on their product. Remember the PDC keynote in 2010? Just because WPF wasn’t mentioned there, a lot of people had the idea that WPF was dead and abandoned for Silverlight. Of course, this couldn’t be farther from the truth. The same applies to Surface. While we didn’t hear much from the team in Redmond, they were busy putting together the next version of the platform, and at CES in January the world saw what they had been up to all along: Surface V2.0, as it’s commonly known. Of course, the product is still in development; it’s not here yet, and we can’t buy one yet. However, more and more information is becoming available, and I think this is a good time to share with you what it’s all about! The biggest change from an organizational point of view is that Microsoft decided to stop producing the hardware themselves. Instead, they have formed a partnership with Samsung, who will manufacture the devices. This means that you as a buyer get the benefits of a large, worldwide supplier with all the services they can offer. Not that Microsoft didn’t do that before, but since Surface wasn’t a ‘big’ product it was sometimes hard to get to the right people. The new device is officially called the “Samsung SUR 40 for Microsoft Surface”, which is quite a mouthful. The software that runs the device is of course still coming from Microsoft. Let’s dive into the technical specs (note: all of this is preliminary, it’s still in the Alpha phase!):
    Audio out: HDMI / Stereo RCA / S/PDIF / 2x 3.5mm audio out jack
    Brightness: 300 cd/m²
    Communications: 1Gb Ethernet / 802.11 / Bluetooth
    Contrast ratio: 1:1000
    CPU: AMD Athlon X2 245e, 2.9GHz dual core
    Display resolution: Full HD 1080p, 1920x1080 / 16:9 aspect ratio
    GPU: AMD Radeon HD 6750, 1GB GDDR5
    HDD: 320 GB / 7200 RPM
    HDMI in / HDMI out: Yes
    I/O ports: 4 USB, SD card reader
    Operating system: Embedded Windows 7 Professional, 64-bit
    Panel size: 40” diagonal
    Protection glass: Gorilla Glass
    RAM: 4 GB DDR3
    Weight (with standard legs): 70.0 kg / 154 lbs
    Weight (standalone): 39.5 kg / 87 lbs
    Height (without legs): 4 inches
    Contact points recognized: > 50
    Cool factor: Extremely
    Ok, the last point is not official, but I do think it needs to be there. Let’s talk software. As noted, it runs Windows 7 Professional 64-bit, which means you can run Visual Studio 2010 on it. The software is going to be developed in WPF 4.0 with the additional Surface SDK 2.0. It will contain all the things you’ve seen before, plus some extras. They have taken some steps to align it more with the Surface Toolkit, which you can download today, so if you do things right your software should be portable between a WPF 4.0 Windows 7 multi-touch app and the Surface V2 environment. It still uses infrared to detect contacts, so in that respect nothing much has changed conceptually. We can still differentiate between a finger, a tag or a blob. Of course, since the new platform has a much higher resolution (compared to the 1024x768 of the first version) you might need to look at your code again. I’ve seen a lot of applications on Surface that assume the old resolution, and moving them to V2 is going to be some work. To be honest, as I am under NDA I cannot disclose much about the new software besides what I have told you here, but trust me: it’s going to blow people away.
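    A bit of arithmetic shows why existing code needs that second look: horizontally everything stretches by a factor of $1920 / 1024 = 1.875$, but vertically only by $1080 / 768 \approx 1.406$, and the aspect ratio moves from 4:3 to 16:9. An app that scales uniformly from 1024x768, or that hard-codes pixel positions, will therefore look wrong on the new panel.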
    Now, the biggest question for me is: when can I get one? Until we can, have a look here. Technorati tags: surface, samsung, WPF

    Read the article

  • Laptop screen blank after login when external monitor is not connected

    - by Ramon Suarez
    Ubuntu does not switch back automatically to the only monitor present when booting after disconnecting the external monitor. Here’s a video showing what happens. I get to the login window and everything looks OK; I type my password, the desktop image shows up and... everything goes blank. It does not happen when I just log in as a guest. When possible I work with my laptop connected to an external screen via the VGA port. The problem comes when I boot the computer without that secondary screen connected: The login screen comes out OK. After login the screen goes black, but I can hear the login sound. If I hit Ctrl + Alt + Backspace and log in again it is sometimes fixed, but not always. If I log in as a different user everything is OK. Then I log in as my user and sometimes it works. To have a screen I have to plug in a monitor. Although I have turned on the laptop display with that monitor on, if I reboot it goes blank again after login, even if I turn off the external monitor before turning off the computer. I’ve managed to get my screen back with my username by going into recovery mode, but only sometimes. Failsafe mode would not load after the second screen asking me what I wanted to do (neither mouse nor keyboard working). My computer is an LDLC Aurore BB1-i5-8-S1. Which is the configuration file that keeps the information about the monitors used by Displays under lightdm, and where is it? I guess if I could edit it I might have a chance :) One of the things I tried, following a solution in another post, was removing my monitors.xml file, but that did not work and I don’t know how to create a good one that I could use now. When doing DISPLAY=:0 xrandr I get:
    Screen 0: minimum 320 x 200, current 320 x 200, maximum 8192 x 8192
    LVDS1 connected (normal left inverted right x axis y axis)
       1366x768 60.0 +
       1360x768 59.8 60.0
       1024x768 60.0
       800x600 60.3 56.2
       640x480 59.9
    VGA1 disconnected (normal left inverted right x axis y axis)
    HDMI1 disconnected (normal left inverted right x axis y axis)
    DP1 disconnected (normal left inverted right x axis y axis)
    This is the full dmesg after activating sudo xdiagnose, as Bryce suggested. (If you tell me the relevant parts I will paste them here.) When connecting the external monitor, only the external one will work, although using Displays I can see that the computer thinks both are working. I’ve asked the question on Launchpad but it keeps expiring without any feedback. In my opinion Ubuntu should be able to detect automatically that there is no external monitor present and switch to the laptop monitor. There’s a similar question here, but it does not apply to my case: External monitor set as primary even when disconnected from laptop. Update: For clarification, the problem happens only with my user, and only once I log in. I even get to see the screensaver for about a second, and then it goes blank. Tried Bryce’s example (see his answer below), but it did not work. This is the info I get from tty1 with DISPLAY=:0 xrandr (identical to the output above). – Ramon Suarez, Jul 9 at 16:36

    Read the article

  • I don't receive mail notifications... Nagios Core 4

    - by alessio
    I have a problem with automatic mail notifications in Nagios Core 4, installed on an Ubuntu 12.04 LTS server. I have tried to send mail as the nagios user and as root with the command:
    echo "test" | mail -s "test mail" [email protected]
    and I received the mail correctly, but I don't receive any automatic mail notifications and I don't know how to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log):
    commands.cfg (the path /usr/bin/mail is the right path):
    # 'notify-host-by-email' command definition
    define command{
        command_name    notify-host-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
        }
    # 'notify-service-by-email' command definition
    define command{
        command_name    notify-service-by-email
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
        }
    # 'process-host-perfdata' command definition
    define command{
        command_name    process-host-perfdata
        command_line    /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out
        }
    # 'process-service-perfdata' command definition
    define command{
        command_name    process-service-perfdata
        command_line    /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out
        }
    contacts.cfg:
    define contact{
        contact_name                    supporto
        alias                           Supporto Clienti DEA
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r
        host_notification_options       d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           [email protected]
        }
    define contactgroup{
        contactgroup_name       admins
        alias                   Nagios Administrators
        members                 supporto
        }
    nagios.log:
    [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
    [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago
    [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
    [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds
    [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in
    [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB)
    [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK
    [1401872931] Auto-save of retention data completed successfully.
    [1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago
    [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
    mail.log (I think the problem is here, but I don't know how to resolve it):
    Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name
    Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry
    Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name
    I'm at the last step and I want to finish this Nagios Core! :) Any help is appreciated! :)
    Host definition (this host has its disk almost full and it is in a hard state, but no notification):
    define host{
        use             generic-host    ; Name of host template to use
        host_name       posta
        alias           Server Posta ESA
        address         10.10.2.102
        parents         xen1, xen2
        icon_image      redhat.png
        statusmap_image redhat.gd2
        }
    Service definition:
    define service{
        use                     generic-service
        host_name               xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi
        service_description     Disk Space
        check_command           ssh_all_disks!10%!5%
        }
    "Notification is allowed for the contact definition you gave, but is it also allowed at the service level?" Sorry, but I don't understand this thing! :(

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-Talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code. Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update": SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft. This form of the UPDATE statement, @variable = column = expression, is documented and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn’t have a natural order to its rows and the UPDATE statement has no means of specifying the order. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique have relied on the fact that the UPDATE statement, without a WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn’t one. Peter wasn’t satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism or partitioning. Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances. Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates.
After all, a Table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request the database engine to do the update in the conventional order. Then perhaps everyone would be satisfied. Cheers, Tony.
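    P.S. For readers meeting the technique for the first time, it may help to see why the ordering is the crux. Written as a recurrence over a hypothetical row number $n$ taken in clustered-index order, the quirky UPDATE computes $S_n = S_{n-1} + \text{PeopleJoined}_n - \text{PeopleLeft}_n$, so every row's running total depends on the row before it. Visit the rows in any other order and every subsequent value is wrong, which is why the technique stands or falls entirely on the reliability of that ordering.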

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session: Context (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview). (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future proof design for hardware we have yet to see. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP. Code (slide 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves. (slide 12-13) index<N>, extent<N>, and grid<N>. (Slide 14-16) array<T,N>, array_view<T,N> and comparison between them. (Slide 17) parallel_for_each. (slide 18, 21) restrict. (slide 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves. (slide 22) bring it altogether with a matrix multiplication example. (slide 23-24) accelerator, and accelerator_view. (slide 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!]. IDE (slide 34,37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it. (slide 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11. Summary (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that). (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.   "But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X" If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
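    If you would rather put one compilable snippet on screen than talk through slides 9-11, here is a minimal array-addition sketch along the same lines. One caveat: it uses the restrict(amp) spelling; if your installed bits are the Developer Preview discussed above, where the slides refer to restrict(direct3d), adjust the keyword to match your compiler.

    #include <amp.h>
    #include <vector>

    void add_arrays(const std::vector<int>& a, const std::vector<int>& b, std::vector<int>& sum)
    {
        using namespace concurrency;
        const int n = static_cast<int>(sum.size());
        array_view<const int, 1> av(n, a);   // wraps host data for the accelerator
        array_view<const int, 1> bv(n, b);
        array_view<int, 1> sv(n, sum);
        sv.discard_data();                   // results are write-only, skip the copy-in
        parallel_for_each(sv.extent, [=](index<1> i) restrict(amp)
        {
            sv[i] = av[i] + bv[i];           // runs once per element on the accelerator
        });
        sv.synchronize();                    // copy results back into 'sum'
    }

    Using array_view rather than array<T,N> keeps the data in the host vectors and makes the copy-back explicit via synchronize(), which is usually the friendlier default for a live demo.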

    Read the article

  • links for 2010-12-08

    - by Bob Rhubart
    What's a data architect? A comic dialog by one who knows: Oracle ACE Director Lewis Cunningham.
    Webcast: Oracle Business Intelligence Forum - December 15, 2010 at 9:00 am PT. "The Oracle Business Intelligence Online Forum is a half-day virtual event that offers you a unique opportunity to see, in one place, the full portfolio of Oracle’s Business Intelligence (BI) offerings, and to learn what sets Oracle apart from the rest. Hear Oracle executives and industry analyst Howard Dresner present the current state of Business Intelligence, along with a series of customers who will share their case studies of putting analytics in action."
    Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance | Forrester Blogs: "Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment." -- Richard Fichera, Forrester
    Seven ways to get things started: Java EE Startup Classes with GlassFish and WebLogic: "This is a blog about a topic that I really don't like. But it comes across my way over and over again, and there's no doubt that you need it from time to time. Enough reasons for me to collect some information about it and publish it for your reference. I am talking about startup/shutdown classes with Java EE applications or servers." -- Oracle ACE Director Markus "@myfear" Eisele
    Monitoring Undelivered Messages in BPEL in SOA 10g (Antony Reynolds' Blog): "I am currently working with a client that wants to know how many undelivered messages they have, and if it reaches a certain threshold then they want to alert the operator. To do this they plan on using the Enterprise Manager alert functions, but first they need to know how many undelivered instances are out there." -- SOA author Antony Reynolds
    VirtualBox Appliances for Developers: "Developers can simply download a few files, assemble them with a script, and then import and run the resulting pre-built VM in VirtualBox. This makes starting with these technologies even easier. Each appliance contains some Hands-On Labs to start learning." -- Peter Paul van de Beek
    Oracle UCM 11g Remote Intradoc Client (RIDC) Integration with Oracle ADF 11g: "It's great that we have out-of-the-box WebCenter ADF task flows for document management in UCM. However, for complete business scenario implementations, usually that's not enough and we need to manage the Content Repository programmatically. This can be achieved through the Remote Intradoc Client (RIDC) API. It's quite hard to find any practical information about this API, but I managed to get code for UCM folder creation/removal and folder information." -- Oracle ACE Director Andrejus Baranovskis
    Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6: "Matjaz Juric of Slovenia, head of the Cloud Computing and SOA Competence Centre at the University of Maribor, and professor at the University of Ljubljana, shares insights about cloud computing, SOA and Java EE 6."
    White Paper: Oracle Complex Event Processing High Availability: "This whitepaper describes the high availability (HA) solutions available in Oracle CEP 11g Release 1 Patch Set 2 and presents the results of a benchmark study demonstrating the performance of the Oracle CEP HA solutions."

    Read the article

  • Access Music from Amie Street in Boxee

    - by Mysticgeek
    One of our favorite sites for discovering new music is Amie Street. Today we take a look at the Amie Street app for Boxee, which allows you to access your favorite tunes from the Boxee interface. Amie Street is a cool site that lets you discover a lot of cool music from independent artists. What makes Amie Street unique is that most of the music starts out free, then the price goes up incrementally as its popularity grows. The Amie Street app for Boxee lets you access music and playlists you’ve created on the site, with more features on the way. For this example we’re using the mouse and keyboard, but of course you can also get to each section using your remote if you have one. Or you can turn your iPod Touch into a Boxee remote too.
    Amie Street in Boxee
    To access the Amie Street app, launch Boxee and click on Apps from the main menu. Under the Search sidebar type in Amie Street and select it from the results field. Then you can add it to the My Apps section... and double-click on the icon. Click on Start to begin using it. You’ll be presented with a Welcome screen where you can sign into your account. If you don’t have an account yet, there is also an option to go to the Amie Street site and create one. Enter in your account credentials... Now you’ll be able to access your Library, Playlists, Search for new tunes, and check out your Recommended bands and artists. Hover the pointer over an album to get a bit more info about it, such as the music genre. You’ll be able to play the songs from the playlists you created on the Amie Street site. You can browse through the history of the music you’ve played as well. Not all the features of this app seem to work as you’d expect them to, and some features, like Browse, are not yet available.
    Conclusion
    At the time of this writing we weren’t able to purchase music or get additional information about the artists. As development continues on Boxee and this app, you can expect more of a full user experience and the ability to purchase music. Even though some of the features are a bit buggy or not available, if you’re a Boxee user and a fan of Amie Street, this is a cool app to add to your collection.
    Download Boxee for Windows, Mac, and Ubuntu
    Learn more about Amie Street on Boxee

    Read the article
