Search Results

Search found 21644 results on 866 pages for 'connection speed'.


  • Installation issue after 4 attempts

    - by SixTen
    Have successfully installed Ubuntu 12.10 on 2 laptops, one running Vista & one running Windows 8. Made 4 attempts (long downloads of WUBI.EXE) to different HDs & still no go. The machine is an older one with Windows 2000 Professional installed & running. The system has 3 hard drives, all with gigabytes of space available: C: (20.5 GB with 7.26 GB free), D: (74.5 GB with 33.1 GB free) & E: (35.3 GB with 24.8 GB free); also an A: 3.5" floppy drive and a CD burner. The CPU is older but seems sufficient: an AMD Athlon XP 1700+, and Task Manager in Windows 2000 shows the processor works fine. The flat-screen display works fine. Here is the error message I receive each time the installation configuration is verified: "No root file system is defined. Please correct this from the partitioning menu." The Ubuntu installer is allowing me a couple of options at the very top right menu. I was able to establish a wireless connection, but the main homepage won't load in Firefox or any other app. I cannot access or even find any partitioning menu from the displayed page. I cannot access files or Windows Explorer to view drives since I'm not running the Windows OS. If I try to go back & re-install Ubuntu 12.10, it always asks me to uninstall the one found on the HD, & then I run WUBI.EXE again, which takes a long time to download. Do I need to go back into Windows 2000 & use Windows Explorer to look at the file structure & add a partition? On previous attempts I have tried loading WUBI.EXE on all 3 HDs: C:, D: & E:. Sure is frustrating! Thanks for any suggestions. New Ubuntu user, & what I've seen so far I like. (J.R.)

    Read the article

  • WebLogic Partner Community Newsletter June 2013

    - by JuergenKress
    Dear WebLogic Partner Community member, This month's newsletter contains tons of Java material with the availability of Java EE 7, including many articles in the May/June Java Magazine and the Duke Award nomination! Thanks for all your efforts to become certified and Specialized. All the experts who achieved the WebLogic Server 12c Certified Implementation Specialist or ADF 11g Certified Implementation Specialist certification can download a logo for their blog or business card at the Competence Center. All the companies that achieved a WebLogic and ADF Specialization can request a nice plaque for their office. Summer time is training time – make sure you attend our Fusion Middleware Summer Camps or one of our upcoming Exalogic Implementation Workshops by PTS in Spain, Turkey and the Middle East. For those who cannot make it, we offer plenty of online courses, like the new ADF Academy, or the webcast The Value of Engineered Systems for SAP. At our WebLogic Community Workspace (WebLogic Community membership required) you can find the Engineered Systems Strategy presentation. Thanks to the community for all the WebLogic best practice papers and tools, like Automated provision of Oracle WebLogic Server Platform & Frank's Coherence videos on YouTube & Create and deploy applications on the Oracle Java Cloud & WebLogic Application redeployment using shared libraries - without downtime & Oracle Traffic Director. The first test deployments for WebLogic on Oracle Database Appliance are on the way; thanks to all who are sharing their experience: Sizing & Configuration and Traffic Director. Virtualised Oracle Database Appliance Proof of Concept - #1 Planning from Simon Haslam. It would be great if you could also share your experience via Twitter @wlscommunity! In the ADF section of the newsletter you will find Tuning Application Module Pools and Connection Pools & List View - Cool Looking ADF PS6 Component for Collections & Insider Essential & User Interface & Skinning & Oracle Forms to ADF Modernization reference. Great summer time! Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsJune2013 (OPN account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • What do you think of the EntLib 5.0 configuration tool?

    Hello again! It's been a while, I know. I've been busy over the last few months with several projects, some of them software related, and one of them human: my son Jesse was born on 26 February 2010. Fun times! Meanwhile, back in Redmond, the p&p team has been busy working on Enterprise Library 5.0; see Grigori's announcement for details on the beta. There's a ton of new stuff in this release, but there's one big new feature that hasn't received a lot of attention that I'm keen to hear your perspectives on. The change is the biggest overhaul to the configuration tool since Enterprise Library was launched. If you haven't yet grabbed the EntLib 5.0 beta, here's a before and after shot of the config tool: Enterprise Library 4.1 config tool. Enterprise Library 5.0 (beta 1) config tool. The tool has been rebuilt from the ground up in response to feedback and usability studies on the previous version of the tool. But is this a step in the right direction? I'd love to hear what you think. If you've downloaded EntLib 5.0 and tried out the tool, please share your thoughts on: First impressions. Is the tool easy to understand? Easy to find what you're looking for? Easy to read existing configuration? Pretty? Ease of use for real life tasks. Rather than make up your own tasks, here are a few sample scenarios you might want to try: Configure the data access block with a SQL Server connection called Audit that points to a database called Audit on a server called DB. Configure the logging block so that any log entries in the Audit category are written to both the Event Log and the Audit database (see above). Configure the validation block with a ruleset called Email Address that uses an appropriate regular expression for e-mail addresses. Configure the policy injection block such that any calls to classes in the MyCompany.Security namespace are logged before and after the call using the Audit category (see above). Comparison with the old config tool. What do you like better in the new tool? What did you like better in the old tool? How do you rate your level of expertise using the old tool? Keep in mind that I no longer work in the p&p team, so I can't say how any of this feedback will be used (although I'm sure the team is listening!). However, since I've invested so much time in Enterprise Library, both in leading the team and using the product on real projects, I'm very interested to hear what you all think of the tool's new direction.
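    For context on the first scenario above, here is a hedged sketch of how application code would then consume that "Audit" connection once the tool has written it to configuration (the table name and query are invented for illustration):

    using System.Data;
    using Microsoft.Practices.EnterpriseLibrary.Data;

    class AuditReport
    {
        static void Main()
        {
            // Resolve the connection named "Audit" that was configured with the tool
            Database db = DatabaseFactory.CreateDatabase("Audit");

            // Run an ad-hoc query against the Audit database on server DB
            DataSet recent = db.ExecuteDataSet(CommandType.Text,
                "SELECT TOP 10 * FROM AuditLog ORDER BY LoggedAt DESC");
        }
    }

    If the configuration round-trips correctly through the new designer, code like this should behave identically whether the file was edited with the 4.1 or the 5.0 tool.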

    Read the article

  • An update on using Rosetta Stone: Studio now isn't very useful and is not great value as an add-on option

    - by Greg Low
    I had a surprisingly large number of responses from my previous posting about learning Chinese. An update for those considering Rosetta Stone (www.rosettastone.com) for Chinese, Spanish or any other language that they offer: I had to renew my "Studio" subscription today and it's now a much worse deal than it was. It's now $75 for 6 months of Studio sessions. Online classes used to be 45 mins; recently they reduced them to 20 mins. Given how often people have connection issues, etc., that 20 mins can disappear very quickly. They've also reduced the number you can attend. You used to be able to have 2 scheduled at any point in time. Now they limit you to 2 "group sessions" per month during the period. (You can pay for additional private sessions.) The combination of these two changes now makes it much less useful. Two 20-min sessions per month is an almost meaningless amount of practice. They also now automatically switch you to auto-renewal when you subscribe. They tell you where to remove this auto-renewal, but the first 4 or 5 times that I went into that screen, no such option appeared. Later, an option did appear and I used it. Overall, things just aren't what they used to be at Rosetta Stone. It's now pretty hard to recommend the Studio option where it was a no-brainer before. FURTHER UPDATE: <sigh> Even after I renewed, I could not even connect to their "new" service. Although the system processed the renewal, it still tells me it's expired. My online chat person "Siva S" tells me that the problem is that I've purchased all 5 levels of the program. I can't wait till they explain to me how making an extra purchase from them stops me from logging on. Siva told me that they had "renewed" the program, said I'd have to speak to Customer Care (who weren't available), and then disconnected. Impressive (not). Their website is now full of issues too. It insists that my billing address is in the USA, even though it pretends to accept changes to it. Overall, it's gone from something that could be recommended (with some limitations) to an app to avoid. That's a pity, as I liked much of it before.

    Read the article

  • Mixing It Up with BluesMix

    - by Oracle OpenWorld Blog Team
    By Karen Shamban At home base in London prior to making a swing along the US west coast later this month, BluesMix took a few minutes to answer some musical questions. Q: What are the top three things people should know about your music? A: We focus on original material and blend funk with blues. We're big on songwriting but also performance, groove, and feel of the music. It's music you can dance to! We're from London, England and have been labeled 'one of the UK's leading blues/funk bands'. Oh - that's four things! :) Q: Do you prefer smaller, intimate venues or larger, louder ones? A: Actually both, for different reasons. We play many intimate club shows in London at prestigious venues such as the 100 Club. There's lots of musical history with these types of clubs, where the likes of the Rolling Stones used to play week-in week-out in the '60s. These shows generally have a fantastic atmosphere, with a close connection to the audience, who are packed close to the stage. They often turn up surprises too… for example, we've had artists such as Amy Winehouse and Mick Abrahams in the crowd enjoying the show and then asking to come onstage and play with the band. Lots of fun! The larger venues are great too, in a different way. We've played to 3,000-person+ crowds and the atmosphere with so many people enjoying the show is a real buzz. It's also nice to play outdoor venues, especially in places with nice weather like California! Q: What's new and different in the music you are playing today, versus a year or two ago? A: Well, we released a new album earlier this year. It's called Flat Nine; it's on the Proper Records label. Whilst our music has always been a blend of blues and vintage funk, this album in particular has evolved our funk side even further. We've received some really great reviews from the music press in the UK and had generous comparisons to the likes of The Meters, Dr. John, The Average White Band, Howlin' Wolf. The album has generated lots of interest, which is fantastic. We're playing regular sellout shows in the UK and are also opening for some legends of the funk music scene, such as The New Mastersounds. BluesMix are headlining the Oracle OpenWorld Welcome Reception in Yerba Buena Gardens on Sunday, September 30 and are playing at the Oracle OpenWorld Music Festival at Slim's on Tuesday, October 2. More on the music: Oracle OpenWorld Music Festival BluesMix

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route that to a configurable listener. This gives you a lot of flexibility – you can create a single trace file – or multiple ones. There is even nice tooling around that. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works, all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside local storage and uses the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism no longer works in 1.3 with Full IIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues...The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now have fun with your multi-instance deployments… </rant>
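    For readers who haven't used the facility being mourned here, a minimal sketch of the plain .NET TraceSource pattern (the source name, file name and message are illustrative):

    using System.Diagnostics;

    class TracingDemo
    {
        static void Main()
        {
            // A named trace source routed to an XML listener, producing the
            // .svclog files that SvcTraceViewer understands.
            var ts = new TraceSource("MyApp.Orders", SourceLevels.All);
            ts.Listeners.Add(new XmlWriterTraceListener("orders.svclog"));

            ts.TraceEvent(TraceEventType.Information, 0, "Order received");
            ts.Flush();
        }
    }

    It is precisely that source name ("MyApp.Orders") which the DiagnosticMonitor flattens into the message text, which is what breaks the filtering and sorting described above.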

    Read the article

  • Connect Team Foundation Service/TFS 2012 with Visual Studio 2010 & Visual Studio 2008

    - by Vishal
    Hello, Microsoft finally released the Team Foundation Service in late October 2012 after its long preview phase. I was already using the TFS Preview, which was free, and I was happy to see Microsoft release the Team Foundation Service also FREE for up to 5 users. Isn't that great news? I know there are a bunch of other free source control repositories (GitHub, Bitbucket, SVN etc.) out there, but I somehow like TFS better. The other good thing about the final release was that I didn't have to do any kind of migration of my code from preview to the final release version. I just changed the TFS connection URL and it worked like a charm. Anyway, if you are a startup with a small team and need some awesome source control along with all the good Project Management, Continuous Integration (Build, Test, Deploy), Team Collaboration, Agile/Scrum planning etc. features, then Team Foundation Service is your answer. Microsoft has not yet released its pricing for more than 5 users and will be releasing it sometime in early 2013. What if, as of now, you have a team of more than 5 users and you want to use Team Foundation Service? The good news is you can use it for FREE, but when they release the final pricing, you will have to transition to the paid plan. Long story short, getting to the point: connecting to Team Foundation Service with Visual Studio 2012 is straightforward and works out of the box, but it won't for previous versions of Visual Studio. You will have to upgrade to the latest service pack first and then install the forward compatibility pack (1st: Service Packs & 2nd: Forward Compatibility packs). For Visual Studio 2010: Visual Studio 2010 Service Pack 1. Visual Studio 2010 forward compatibility for TFS 2012 and Team Foundation Service. For Visual Studio 2008: Visual Studio 2008 Service Pack 1. Visual Studio 2008 forward compatibility for TFS 2012 & Team Foundation Service. Restart your system. Visual Studio 2008 will not work if you only put https://xxx.visualstudio.com; you will have to put your collection name too, as shown below. By the way, it doesn't matter if you are an Apple application developer or an Android app developer, you can still use Team Foundation Service as your source control. Below are a few links to connect to Team Foundation Service with other IDEs: Connect Eclipse to Team Foundation Service. Connect XCode to Team Foundation Service. Happy coding. Vishal Mody

    Read the article

  • Bluetooth firmware problem in Ubuntu 13.04

    - by chanzerre
    I have a Dell Inspiron 15R 5520 laptop. Bluetooth is not working at all. rfkill list all gives:
    0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
    1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    2: brcmwl-0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    dmesg|grep -i bluetooth gives:
    [ 13.644428] Bluetooth: Core ver 2.16
    [ 13.644445] Bluetooth: HCI device and connection manager initialized
    [ 13.644453] Bluetooth: HCI socket layer initialized
    [ 13.644455] Bluetooth: L2CAP socket layer initialized
    [ 13.644461] Bluetooth: SCO socket layer initialized
    [ 15.861363] Bluetooth: hci0 command 0x1003 tx timeout
    [ 15.903443] Bluetooth: can't load firmware, may not work correctly
    [ 17.332535] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [ 17.332538] Bluetooth: BNEP filters: protocol multicast
    [ 17.332544] Bluetooth: BNEP socket layer initialized
    [ 17.393768] Bluetooth: RFCOMM TTY layer initialized
    [ 17.393781] Bluetooth: RFCOMM socket layer initialized
    [ 17.393783] Bluetooth: RFCOMM ver 1.11
    hciconfig gives:
    hci0: Type: BR/EDR  Bus: USB
            BD Address: E0:06:E6:D5:DB:46  ACL MTU: 1021:8  SCO MTU: 64:1
            UP RUNNING PSCAN ISCAN
            RX bytes:687 acl:0 sco:0 events:56 errors:0
            TX bytes:2024 acl:0 sco:0 commands:52 errors:0
    I have visited the site http://wireless.kernel.org/en/users/Drivers/b43 and, according to it, lspci -vnn -d 14e4: gives:
    08:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
            Subsystem: Dell Wireless 1704 802.11n + BT 4.0 [1028:0016]
            Flags: bus master, fast devsel, latency 0, IRQ 17
            Memory at c1500000 (64-bit, non-prefetchable) [size=32K]
            Capabilities: <access denied>
            Kernel driver in use: wl
    So I got my PCI ID as 14e4:4365, which it says is not supported. The alternative is wl. What should I do? My Wi-Fi is working normally without any problems, but Bluetooth is not working. sudo dpkg -i wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb gives the following error:
    (Reading database ... 208543 files and directories currently installed.)
    Unpacking wireless-bcm43142-dkms (from wireless-bcm43142-dkms_6.20.55.19-1_amd64.deb) ...
    Setting up wireless-bcm43142-dkms (6.20.55.19-1) ...
    Loading new wireless-bcm43142-6.20.55.19 DKMS files...
    Building only for 3.8.0-23-generic
    Building initial module for 3.8.0-23-generic
    Traceback (most recent call last):
      File "/usr/share/apport/package-hooks/dkms_packages.py", line 22, in <module>
        import apport
    ImportError: No module named apport
    Error! Bad return status for module build on kernel: 3.8.0-23-generic (x86_64)
    Consult /var/lib/dkms/wireless-bcm43142/6.20.55.19/build/make.log for more information.

    Read the article

  • Simple method for reliably detecting code in text?

    - by Jeff Atwood
    GMail has this feature where it will warn you if you try to send an email that it thinks might have an attachment. Because GMail detected the string "see the attached" in the email, but no actual attachment, it warns me with an OK / Cancel dialog when I click the Send button. We have a related problem on Stack Overflow. That is, when a user enters a post like this one: my problem is I need to change the database but I don't won't to create a new connection. example: DataSet dsMasterInfo = new DataSet(); Database db = DatabaseFactory.CreateDatabase("ConnectionString"); DbCommand dbCommand = db.GetStoredProcCommand("uspGetMasterName"); This user did not format their code as code! That is, they didn't indent by 4 spaces per Markdown, or use the code button (or the keyboard shortcut ctrl+k) which does that for them. Thus, our system is accreting a lot of edits where people have to go in and manually format code for people that are somehow unable to figure this out. This leads to a lot of bellyaching. We've improved the editor help several times, but short of driving over to the user's house and pressing the correct buttons on their keyboard for them, we're at a loss to see what to do next. That's why we are considering a GMail-style warning: "Did you mean to post code? You wrote stuff that we think looks like code, but you didn't format it as code by indenting 4 spaces, using the toolbar code button or the ctrl+k code formatting command." However, presenting this warning requires us to detect the presence of what we think is unformatted code in a question. What is a simple, semi-reliable way of doing this? Per Markdown, code is always indented by 4 spaces or within backticks, so anything correctly formatted can be discarded from the check immediately. This is only a warning and it will only apply to low-reputation users asking their first questions (or providing their first answers), so some false positives are OK, so long as they are about 5% or less. Questions on Stack Overflow can be in any language, though we can realistically limit our check to, say, the "big ten" languages. Per the tags page that would be C#, Java, PHP, JavaScript, Objective-C, C, C++, Python, Ruby. Use the Stack Overflow Creative Commons data dump to audit your potential solution (or just pick a few questions in the top 10 tags on Stack Overflow) and see how it does. Pseudocode is fine, but we use C# if you want to be extra friendly. The simpler the better (so long as it works). KISS! If your solution requires us to attempt to compile posts in 10 different compilers, or an army of people to manually train a Bayesian inference engine, that's ... not exactly what we had in mind.
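    Since the question explicitly invites sketches, here is one possible shape of such a heuristic in C#. The keyword list and the 20% threshold are guesses to be tuned against the data dump, not a tested solution:

    using System.Text.RegularExpressions;

    static class CodeSniffer
    {
        // Score each line on cheap surface features common to the "big ten"
        // languages: statement terminators, braces, and keywords next to parens.
        public static bool LooksLikeUnformattedCode(string post)
        {
            int codeLike = 0, total = 0;
            foreach (string raw in post.Split('\n'))
            {
                string line = raw.TrimEnd();
                if (line.Length == 0) continue;
                if (raw.StartsWith("    ")) continue; // already Markdown-formatted code
                total++;
                bool punct = line.EndsWith(";") || line.EndsWith("{") || line.EndsWith("}");
                bool keyword = Regex.IsMatch(line,
                    @"\b(var|void|int|new|return|public|class|def|function|import)\b");
                if (punct || (keyword && line.Contains("(")))
                    codeLike++;
            }
            return total > 0 && codeLike / (double)total > 0.2;
        }
    }

    Auditing this against the top-ten-tag questions in the data dump would show quickly whether the false positive rate stays under the 5% budget.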

    Read the article

  • How to install Network Adapter Drivers for Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) Ubuntu 12.04

    - by Jessica Burnett
    How can I install drivers for the 64-bit Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) on Ubuntu 12.04? I dual boot Windows 7/Ubuntu 12.04; the drivers work for 64-bit Windows 7. lspci -nn:
    00:00.0 Host bridge [0600]: Intel Corporation Ivy Bridge DRAM Controller [8086:0154] (rev 09)
    00:01.0 PCI bridge [0604]: Intel Corporation Ivy Bridge PCI Express Root Port [8086:0151] (rev 09)
    00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
    00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04)
    00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04)
    00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
    00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04)
    00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4)
    00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4)
    00:1c.3 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 4 [8086:1e16] (rev c4)
    00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
    00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04)
    00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
    00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04)
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)
    02:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 08)
    03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2200 [8086:0891] (rev c4)
    04:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2392] (rev 30)
    04:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2391] (rev 30)
    04:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2393] (rev 30)
    04:00.4 System peripheral [0880]: JMicron Technology Corp. xD Host Controller [197b:2394] (rev 30)
    sudo lshw -c network:
    *-network UNCLAIMED
         description: Ethernet controller
         product: AR8161 Gigabit Ethernet
         vendor: Atheros Communications Inc.
         physical id: 0
         bus info: pci@0000:02:00.0
         version: 08
         width: 64 bits
         clock: 33MHz
         capabilities: pm pciexpress msi msix bus_master cap_list
         configuration: latency=0
         resources: memory:d3a00000-d3a3ffff ioport:2000(size=128)
    *-network
         description: Wireless interface
         product: Centrino Wireless-N 2200
         vendor: Intel Corporation
         physical id: 0
         bus info: pci@0000:03:00.0
         logical name: wlan0
         version: c4
         serial: 9c:4e:36:14:d4:7c
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
         configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic firmware=18.168.6.1 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
         resources: irq:45 memory:d3900000-d3901fff
    I also tried manually configuring the wired connection. Neither wired nor wireless connects.

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now backup (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is sample code that does the trick:

    // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
    StorageBackupHelper backup = new StorageBackupHelper("storageaccountname", "storageaccountkey", "sourceStorageaccountname", "sourceStorageaccountkey", true, "apilicensekey");

    // Now set some properties…
    backup.UseCloudAgent = false;                       // backup locally
    backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
    backup.Override = true;
    backup.Location = DeviceLocation.LocalFile;

    // Set optional performance options
    backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
    backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

    // Start the backup now…
    string taskId = backup.Backup();

    // Use the Environment class to get the final status of the operation
    EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
    string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. Your PartitionKey doesn't have to contain GUIDs for this strategy to work, but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • Memory Glutton

    - by AreYouSerious
    I have to admit that I can't get enough storage. I have hard drives just sitting around in case I need to move something, or I'm going to a friend's and either they want something I have or I want something they might have. What I'm going to talk about today is cost-effective memory for devices. I don't know how this particular device will work in a camera, as that's not what I use in my camera; in fact I don't have a camera that doesn't use either SD or the old CompactFlash card, which isn't so compact anymore. There's this thing that uses two micro SD cards to double the capacity of your memory, and it costs about 4 bucks, without the micro SD cards. I have had one for about a year and was going to throw it away because I couldn't get it to work with my computer, or with my Sony Reader. However, I found out by one last ditch effort that this thing works beautifully with my Sony PSP. There is no software to speak of associated with this thing: you simply put in two SD cards of the same size (if you put in two different sizes it will still work, you'll only double the smaller card's size though) and format through the PSP. Voila, you now have a 29 GB memory card for your PSP. Why is this important? Well, for starters you can carry more music and more videos. Second, if you have gone the way of the hacker... you can store more games on your card... There are just a few things you have to note... I speak from experience... You have to use the USB connection to the PSP to do any file moving, as, like I said previously, said card doesn't play well with my computers or card readers. I'm not saying it won't work at all, it just hasn't worked with anything I own. Second, if for some reason you try to hack/crack your PSP, don't attempt to delete a game from the PSP; use the USB file browser to remove games. If you delete from the PSP you are likely to have to move all your files off, reformat and start again... just a couple of things I have noticed... if I had done something like that. Anyway, here's a link: http://www.photofast-adapter.com/ and if you want to buy one, get it off eBay; I've seen them as low as $1.99

    Read the article

  • Getting WLAN on my Laptop to work (Medion MD98300)

    - by Anand Böhmer
    Dear Ubuntu Community, I am having difficulties getting the WLAN adapter on my Medion MD98300 laptop to work. The WLAN card seems to be connected through an internal USB interface, and the card itself had shown up as a wireless network device while installing Ubuntu. I have tried a few things already, but none of my Google research has brought me to a working solution... I am quite new to Linux and only know a couple of terminal commands so far, so I have probably missed out on a few possible solutions. Maybe you can help me? Thank you very much in advance! A few minimal technical details: AMD Turion 64 X2 Dual-Core Mobile Technology TL-50, NVIDIA GeForce Go 6150, SanDisk 64GB SSD, 2GB RAM DDR2 667, nForce chipset (I forgot the version, but deducible from the GPU I guess), WiFi: ZyDAS ZD1211B 802.11g. Thanks a lot again! :) UPDATE: I tried around myself a little and found a guide on the Linux Mint forums that helped! I had already tried to install the Linux backport modules etc. What I finally did was update the Linux firmware and run the following command: echo "options acer_wmi wireless=1" | sudo tee /etc/modprobe.d/acer_wmi.conf and rebooted. Now I could find and connect to networks, but unfortunately the link quality was very bad, around 40 to 50, even though my router is running at high power and is only 6 meters away! I then switched a few channels, but that did not improve much. Before, under Windows, I had a very good link quality and had the entire 16 Mbit/s internet connection at my disposal; now I can only get about 3-5 Mbit/s. Better than nothing, but still pretty bad! The "TX power" is fixed at 20 dBm and iwconfig says that the "Power Management" is off... Maybe the power of the module is set too low?... UPDATE 2: I figured out that 20 dBm is a normal power output. I even tried to change the power using iwconfig wlan0 txpower INTEGERHERE, but obviously my "card" does not support more than 20. More would probably be illegal as well, so I won't even use more than 20. I guess that I will have to figure out a way, or maybe just switch cards. Are the mainboard USB connectors on a laptop of the same type as the standard external ones? If so, I could simply solder a micro wireless-N adapter onto the board :)

    Read the article

  • How to build a 4x game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest: 1) map data: how to store stellar systems (graphs?), how to generate them and so on.. 2) multiplayer: how to organize code into a non-graphical server and a client to display it 3) command system: what are the patterns to capture user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy" and so on (commands can affect ships, planets, research, etc..) 4) ai system: the AI can use commands to expand, upgrade planets and ships. I know these are big questions, so help is appreciated :D 1) Map data The best choice is a graph to model a galaxy. A node is a stellar system and every system has a list of planets. Ships cannot travel outside of predefined paths, like in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is only a matter of: - dimension: number of stellar systems, - variety: randomize number of planets and types (desertic, earth, etc..), - positions of each stellar system in game space - connections: assure that a path exists between every node, so the graph is "connected" (not sure if this is the mathematically correct term); see the sketch below for one way to model this. 2) Multiplayer The game is organized in turns: player 1, player 2, ai1, ai2. The server takes care of all data and the clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D 3) Command system I would like to design a hierarchy of commands to take care of this aspect: abstract GenericCommand(target), ExploreCommand(Ship) extends GenericCommand, ColonizeCommand(Ship), BuildCommand(planet, object) and so on. In my head all these commands are stored in a queue for every planet, ship, research center or spy, and each turn a command is sent to the server to apply the command and change the data state. 4) ai system I don't have any ideas about this. It is a big topic and what I want is a simple AI. Something like "expand and fight against everyone". I am thinking about a behaviour tree to control AI moves, so I can develop an AI that tries to build ships to expand, then colonize planets, upgrade them through science and combat enemies. Could it be done with a finite state machine too? Any ideas, resources, articles are welcome!
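    To make the map-data part concrete, here is a minimal sketch of the galaxy graph described above (all names are invented for illustration):

    using System.Collections.Generic;

    class Planet
    {
        public string Type; // e.g. "desertic", "earth"
    }

    class StellarSystem
    {
        public string Name;
        public List<Planet> Planets = new List<Planet>();
        // Adjacent system -> travel cost in turns; ships may only move along these paths
        public Dictionary<StellarSystem, int> Paths = new Dictionary<StellarSystem, int>();
    }

    class Galaxy
    {
        public List<StellarSystem> Systems = new List<StellarSystem>();

        public void Connect(StellarSystem a, StellarSystem b, int turns)
        {
            // Undirected connection: store the edge on both endpoints
            a.Paths[b] = turns;
            b.Paths[a] = turns;
        }
    }

    Verifying that a generated galaxy is connected is then a single breadth-first traversal from any system, checking that every system is reached.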

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, DHCP3-server, eth0, eth1, eth2. Edit: removed br0 & br1. eth0 is the external connection; eth1 & eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students and teachers respectively. What I would like to have is the internet from the external device shared with devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is getting internet through. I have set up a script that I found here: Router With the original script he linked here: Ubuntu Router Guide

    echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
    IPTABLES=/sbin/iptables
    #IPTABLES=/usr/local/sbin/iptables
    DEPMOD=/sbin/depmod
    MODPROBE=/sbin/modprobe
    EXTIF="eth0"
    INTIF="eth1"
    INTIF2="eth2"
    echo "  External Interface: $EXTIF"
    echo "  Internal Interface: $INTIF"
    echo "  Internal Interface: $INTIF2"
    EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
    echo "  External IP: $EXTIP"
    #======================================================================
    #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as is. I can get an IP from the eth1 & eth2 devices, and my computer can see them, and they it; however, internet is not being passed through. If you need more information please just let me know. EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card; I will test later. After changing the subnet to 255.255.255.0 the pings pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

    # /etc/iptables.up.rules
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *mangle
    :PREROUTING ACCEPT [39:4283]
    :INPUT ACCEPT [39:4283]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [12:4884]
    :POSTROUTING ACCEPT [13:5145]
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *filter
    :FORWARD ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A FORWARD -j LOG
    -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *nat
    :INPUT ACCEPT [0:0]
    :PREROUTING ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A POSTROUTING -o eth0 -j MASQUERADE
    -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed for the operators on site to know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties:
    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.
    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way up to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky. (A sketch of this approach follows below.)
    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.
    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.
    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
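    As an illustration of the second approach, a minimal dirty-flag sketch in C# (names invented; the "did the rebuilt node actually change?" bookkeeping is elided):

    using System.Collections.Generic;

    abstract class DependentNode
    {
        public DependentNode Parent;
        public bool Dirty { get; private set; }

        // Called by an original-tree node when it changes.
        public void MarkDirty()
        {
            for (var n = this; n != null && !n.Dirty; n = n.Parent)
                n.Dirty = true; // flag the node and every ancestor up to the root
        }

        // Walk from the root after each change, rebuilding only dirty nodes;
        // clean subtrees are skipped entirely because dirtiness propagates upward.
        public void Refresh()
        {
            if (!Dirty) return;
            Rebuild();
            Dirty = false;
            foreach (var child in Children)
                child.Refresh();
        }

        protected abstract void Rebuild();
        protected abstract IEnumerable<DependentNode> Children { get; }
    }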

    Read the article

  • Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations

    - by Tanu Sood
    It is generally accepted among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices, along with dos & don'ts, that our practice has accumulated over the years in the identity & access management space in general, and in the context of R2 in particular.

    Best Practices
    The following section illustrates the leading practices in how to plan, implement and sustain a successful OIM deployment, based on our collective experience.

    Planning is critical, but often overlooked
    A common approach to planning an IAM program that we take with our clients is a three-step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align the future state roadmap to business initiatives and get buy-in from all stakeholders involved to improve the chances of success. When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success. As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your organization has numerous connected applications that rely on reconciliation to synchronize the access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of a clustered environment for development and have at least three total environments to help facilitate a controlled migration to production. If your organization is planning to implement role based access control, we recommend performing a role mining exercise and consolidating your enterprise roles to keep them manageable. In addition, many organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set. Most organizations have operations managed across data centers with backend database synchronization; if your organization falls into this category, ensure that the overall latency between the data centers when replicating the databases is less than ten milliseconds, to ensure that there are no front office performance impacts.

    Ingredients for a successful implementation
    During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances for success. Most implementations cannot be completed without the use of customizations. If your implementation requires this, it's a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency.
    Many organizations struggle with defining a consistent approach to managing logs. This is particularly important due to the amount of information that can be logged by OIM. We recommend Oracle Diagnostics Logging (ODL) as the alternative to use for logging. ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when the log levels are changed during troubleshooting. Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment should use production-like data and connectors. Configurations should match as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose. Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or migration between environments. In the lowest environments, we recommend having at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, it is recommended to take frequent snapshots so that rollbacks can occur in the event of improper configuration.

    Operate & sustain the solution to derive maximum benefits
    When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by category (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn't enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization's auditing needs, and sometimes we even allocate UPA tables and indexes into their own tablespace for better maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals in order to keep the size of the database(s) reasonable and result in optimal database performance. For our clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on target availability for a given target application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM's monitoring controls be configured to alert administrators of any abnormal situations. Within OIM R2, we have begun advising our clients to utilize the 'profile' feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users will spend less time performing the same exact steps for common tasks. We advise our clients to follow the Oracle recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads and proper handling of adapters/plug-ins. All of these can be important configurations that will allow faster provisioning and web page response times.
    Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning a classification level based on criticality, risk, and availability. It should finish by following through with a remediation process.

    Dos & Don'ts
    Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution. Each "do" is paired with the "don't" it guards against:
    Do: Scope the project into phases with realistic goals, and look for quick wins to show success and value to the stakeholders. Don't: try to "boil the ocean" by integrating all enterprise applications in the first phase.
    Do: Establish an enterprise ID (universal unique ID across the enterprise) early in the program. Don't: make major UI customizations that require code changes.
    Do: Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database). Don't: publish all the target entitlements if you don't anticipate their usage during access requests.
    Do: Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals, aligned with your business priorities. Don't: integrate non-production environments with your production target systems.
    Do: Defer complex integrations to the later phases and take advantage of lessons learned from previous phases. Don't: create multiple accounts for the same user on the same system if it can be avoided.
    Do: Have an identity and access data quality initiative built into your plan to identify and remediate data related issues early on. Don't: create complex approval workflows that would negatively impact productivity and SLAs.
    Do: Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product related decisions; this will help overcome any design hurdles. Don't: create complex designs that are not sustainable long term and would need a major overhaul during upgrades.
    Do: Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution. Don't: treat IAM as a point solution; have an appropriate level of communication and training planned for IT and business users alike.

    Conclusion
    In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes the PwC blog series on R2 for the month and we sincerely hope that the information we have shared thus far has been beneficial. For more information or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC, and/or Dharma Padala, Director, PwC. We look forward to hearing from you.
Meet the Writers:

Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale Identity Management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience delivering IT solutions, the past 8 of them spent implementing Identity Management solutions.

Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience spans security for networks, infrastructure, applications and data, and he brings a holistic point of view to problem solving.

Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience delivering Identity Management solutions.

John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation, "Best Practices for Developing Performant Applications", my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2 - I'll get to that in a moment. It was also pretty easy in Oracle 11.1, but it needed some tiny application changes. In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto-committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit-flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query.

    In Oracle 11.2, DBAs can now nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit-flag requirement has been lifted. Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling.

    There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers" then make sure you use oci_pconnect() persistent connections. Also, in PHP don't bind strings in the query, although binding as SQLT_INT is OK.
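    To make the 11.2 table annotation concrete, here is a minimal SQL sketch of the DBA-side setup. The table name comes from the example above; the cache sizing values are illustrative assumptions, not recommendations.

        -- Nominate a look-up table so its query results are cached in clients
        ALTER TABLE employees RESULT_CACHE (MODE FORCE);

        -- Enable and size the client result cache; these are static parameters,
        -- so they go into the SPFILE and take effect after a database restart
        ALTER SYSTEM SET client_result_cache_size = 10M SCOPE=SPFILE;
        ALTER SYSTEM SET client_result_cache_lag = 5000 SCOPE=SPFILE;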

    Read the article

  • Firewall error when running Pando Media Booster (for League of Legends) in wine

    - by Matt2
    When I'm downloading League of Legends using Pando Media Booster in Wine, I get an error when it starts:

    Connection Error
    Your system is currently not allowing access to our servers. Check your Firewall and/or security software settings to allow PMB.exe to run.

    Reluctantly, I disabled ufw, but to no avail. The terminal displays the following multiple times:

    fixme:msvcp90:_Locinfo__Locinfo_ctor_cat_cstr (0x33fcf8 1 C) semi-stub
    fixme:dbghelp:EnumerateLoadedModulesW64 If this happens, bump the number in mod
    fixme:wininet:InternetAttemptConnect Stub
    fixme:oleacc:CreateStdAccessibleObject 0x4f00bc -4 {618736e0-3c3d-11cf-810c-00aa00389b71} 0xc252d18
    fixme:oleacc:CreateStdAccessibleObject 0x3700c0 -4 {618736e0-3c3d-11cf-810c-00aa00389b71} 0xc252958
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:uxtheme:BeginBufferedPaint Stub (0x1c28 0xcde880 0 (nil) 0xc2f6fe8)
    fixme:uxtheme:EndBufferedPaint Stub ((nil) 1)
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:uxtheme:EndBufferedPaint Stub ((nil) 1)
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:InternetAttemptConnect Stub
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:InternetAttemptConnect Stub
    fixme:wininet:InternetAttemptConnect Stub
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:advapi:RegisterEventSourceW ((null),L"BugSplat"): stub
    fixme:advapi:ReportEventW (0xcafe4242,0x0001,0x0000,0x00000001,(nil),0x0003,0x00000000,0x33f224,(nil)): stub
    err:eventlog:ReportEventW L"Pando_Win"
    err:eventlog:ReportEventW L"Pando"
    err:eventlog:ReportEventW L"-1"
    fixme:advapi:DeregisterEventSource (0xcafe4242) stub
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:wininet:CommitUrlCacheEntryInternal entry already in cache - don't know what to do!
    fixme:advapi:RegisterEventSourceW ((null),L"BugSplat"): stub
    fixme:advapi:ReportEventW (0xcafe4242,0x0001,0x0000,0x00000001,(nil),0x0003,0x00000000,0x33f224,(nil)): stub
    err:eventlog:ReportEventW L"Pando_Win"
    err:eventlog:ReportEventW L"Pando"
    err:eventlog:ReportEventW L"-1"
    fixme:advapi:DeregisterEventSource (0xcafe4242) stub

    Any idea what's going on here?
Is there a better place to put this question?

    Read the article

  • Northwind now available on SQL Azure

    - by jamiet
    Two weeks ago I made available a copy of [AdventureWorks2012] on SQL Azure and published credentials so that anyone from the SQL community could connect up and experience SQL Azure, probably for the first time. One of the (somewhat) popular requests thereafter was to make the venerable Northwind database available too, so I am pleased to say that as of right now, Northwind is up there too. You will notice immediately that all of the Northwind tables (and the stored procedures and views too) have been moved into a schema called [Northwind] – this was so that they could be easily differentiated from the existing [AdventureWorks2012] objects.

    I used an SQL Server Data Tools (SSDT) project to publish the schema and data up to this SQL Azure database; if you are at all interested in poking around that SSDT project then I have made it available on Codeplex for your convenience under the MS-PL license – go and get it from https://northwindssdt.codeplex.com/.

    Using SSDT proved particularly useful as it alerted me to some aspects of Northwind that were not compatible with SQL Azure, namely that five of the tables did not have clustered indexes. The beauty of using SSDT is that I am alerted to these issues before I even attempt a connection to SQL Azure. Pretty cool, no? Fixing this situation was of course very easy; I simply changed the following primary keys from being nonclustered to clustered:

    [PK_Region]
    [PK_CustomerDemographics]
    [PK_EmployeeTerritories]
    [PK_Territories]
    [PK_CustomerCustomerDemo]

    If you want to connect up then here are the credentials that you will need:

    Server: mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly

    You will need SQL Server Management Studio (SSMS) 2008R2 installed in order to connect, or alternatively simply use this handy website: https://mhknbn2kdz.database.windows.net which provides a web interface to a SQL Azure server.

    Do remember that hosting this database is not free, so if you find that you are making use of it please help to keep it available by visiting PayPal and donating any amount at all to [email protected]. To make this easy you can simply hit this link and the details will be completed for you – all you have to do is login and hit the "Send" button. If you are already a PayPal member then it should take you all of about 20 seconds!

    I hope this is useful to some of you folks out there. Don't forget that we also have more data up there than in the conventional [AdventureWorks2012]; read more at Big AdventureWorks2012.

    @Jamiet

    AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
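    For anyone curious, in the SSDT project the fix is simply declaring each key CLUSTERED in the table definition; against a live SQL Server the equivalent is a drop-and-recreate of the constraint. A minimal sketch for one of the five keys, using the [RegionID] column from the standard Northwind schema (adjust names for the other tables):

        -- Recreate PK_Region as a clustered primary key
        ALTER TABLE [Northwind].[Region] DROP CONSTRAINT [PK_Region];
        ALTER TABLE [Northwind].[Region] ADD CONSTRAINT [PK_Region]
            PRIMARY KEY CLUSTERED ([RegionID]);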

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .Net Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the support for multiple diagrams per model in the enhanced Entity Framework designer.

    In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the Codeplex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and, now at version 5.0, can be used as your data access layer in all your .Net projects. I have posted extensively about Entity Framework in my blog. Please find all the EF-related posts here.

    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file; in EF 5.0 we can split the model across more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a New Project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.Net Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g. Customer) from my diagram and right-click on it, a new option appears: "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that, a new diagram is created and our entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that, the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back into Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change the colors of the various entities that make up the model. Select the entities you want to recolor, then in the Properties window choose the color of your choice. Have a look at the picture below.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy for EF models that have a large number of entities. Hope it helps!!!

    Read the article

  • How do I connect Ubuntu to Sony Bravia LED TV via HDMI?

    - by VedVals
    My laptop connects to a 46" Sony Bravia LED TV using an HDMI cable in Windows without any problem. However, it just doesn't work with Ubuntu 12.04; it always says No Signal. The problem somewhat persists after upgrading to 13.04: the Bravia now detects my laptop as a connection, but I am unable to display anything. What drivers/applications should I install to connect via HDMI?

    Output of sudo lspci -nn:

    00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09)
    00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
    00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
    00:1a.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 05)
    00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 05)
    00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b5)
    00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b5)
    00:1c.3 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 [8086:1c16] (rev b5)
    00:1c.4 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 [8086:1c18] (rev b5)
    00:1c.5 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 [8086:1c1a] (rev b5)
    00:1d.0 USB controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 05)
    00:1f.0 ISA bridge [0601]: Intel Corporation HM67 Express Chipset Family LPC Controller [8086:1c4b] (rev 05)
    00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 05)
    00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 05)
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF108M [GeForce GT 525M] [10de:0df5] (rev ff)
    03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1030 [Rainbow Peak] [8086:008a] (rev 34)
    04:00.0 USB controller [0c03]: NEC Corporation uPD720200 USB 3.0 Host Controller [1033:0194] (rev 04)
    06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06)

    Read the article

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a Live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP.

    After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.) Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about a week now.)

    I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but the touchpad not. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working.

    So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test.

    Xorg logs

    I captured four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist.

    Updated 2010-12-15 01:50 UTC with Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad.
    Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

    Read the article

  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. The two cards are in a Linux box used only for this bridge: one with two 10GBase-T ports and another with a single CX4 port. Two client computers connect via the 10GBase-T ports, and the CX4 card connects to a ProCurve switch.

    I can get the bridging happening mostly the way that I want: the clients receive DHCP information from the DHCP server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are OK; not amazing, but working on that is another matter.

    My problem is that the bridging machine has no internet access, meaning I can't update anything or apt-get anything. It can ping all other machines on the local network. I've tried the helpful hints from https://help.ubuntu.com/community/NetworkConnectionBridge ("Enabling Internet Use on the Bridging Computer") and get the following: RTNETLINK answers: File exists. And dhclient br0 does nothing for me :( I suspect a multiple-route problem, if anything, as both br0 and eth4 have IP addresses, even though I have only set one up for br0.

    Bridge setup details:

    /etc/network/interfaces

    auto br0
    iface br0 inet static
        address 192.168.0.246
        netmask 255.255.255.0
        gateway 192.168.0.1
        broadcast 192.168.0.255
        dns-nameservers 192.168.0.1
        dns-search example.com
        dns-domain example.com
        # (eth2 & eth3 are the 10GBase-T ports)
        # (eth4 is the CX4 connection)
        pre-up ip link set eth2 down
        pre-up ip link set eth3 down
        pre-up ip link set eth4 down
        pre-up brctl addbr br0
        pre-up brctl addif br0 eth4 eth3 eth2
        pre-up ip addr flush dev eth3
        pre-up ip addr flush dev eth2
        pre-up ip addr flush dev eth4
        post-down ip link set eth4 down
        post-down ip link set eth2 down
        post-down ip link set eth3 down
        post-down ip link set br0 down
        post-down brctl delif br0 eth2 eth3 eth4
        post-down brctl delbr br0

    ifconfig -a

    br0   Link encap:Ethernet  HWaddr 00:15:17:22:20:34
          inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4957 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:596320 (596.3 KB)  TX bytes:139952 (139.9 KB)

    eth4  Link encap:Ethernet  HWaddr 00:60:dd:47:7c:05
          inet addr:192.168.0.57  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:15391 errors:0 dropped:51 overruns:0 frame:0
          TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5916769 (5.9 MB)  TX bytes:154312 (154.3 KB)
          Interrupt:70

    route -n

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth4
    0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 br0
    169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 br0
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
    192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth4

    Read the article

  • /etc/postfix/transport missing; what should it look like?

    - by Thufir
    I'm following the mailman guide but couldn't locate /etc/postfix/transport, so I created it as follows:

    root@dur:~# cat /etc/postfix/transport
    dur.bounceme.net mailman:
    root@dur:~#
    root@dur:~# telnet localhost 25
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    220 dur.bounceme.net ESMTP Postfix (Ubuntu)
    ehlo fqdn_test
    250-dur.bounceme.net
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    mail from:[email protected]
    250 2.1.0 Ok
    rcpt to:thufir@localhost
    451 4.3.0 <thufir@localhost>: Temporary lookup failure
    rcpt to:[email protected]
    451 4.3.0 <[email protected]>: Temporary lookup failure
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.
    root@dur:~#
    root@dur:~# postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
    append_dot_mydomain = no
    biff = no
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    default_transport = smtp
    home_mailbox = Maildir/
    inet_interfaces = loopback-only
    mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
    mailbox_size_limit = 0
    mailman_destination_recipient_limit = 1
    mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
    myhostname = dur.bounceme.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    readme_directory = no
    recipient_delimiter = +
    relay_domains = lists.dur.bounceme.net
    relay_transport = relay
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_path = private/dovecot-auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_type = dovecot
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
    smtpd_tls_mandatory_ciphers = medium
    smtpd_tls_mandatory_protocols = SSLv3, TLSv1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
    root@dur:~#
    root@dur:~# tail /var/log/mail.log
    Aug 28 02:05:15 dur postfix/smtpd[20326]: connect from localhost[127.0.0.1]
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "thufir@localhost"
    Aug 28 02:06:10 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "[email protected]"
    Aug 28 02:06:23 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:28 dur postfix/smtpd[20326]: disconnect from localhost[127.0.0.1]
    Aug 28 02:06:49 dur dovecot: pop3-login: Login: user=<thufir>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=20338, TLS
    Aug 28 02:06:49 dur dovecot: pop3(thufir): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
    root@dur:~#

    The manual page is here.

    Read the article

< Previous Page | 714 715 716 717 718 719 720 721 722 723 724 725  | Next Page >