Search Results

Search found 1247 results on 50 pages for 'masters degree'.


  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the possibilities of VMware ESXi memory overcommitment. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure. I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines with Windows XP, Windows 7 and Ubuntu for workstation hosts, as well as CentOS and Windows 2008 Server instances for servers. The problem, however, is that the host machine only has 12GB of RAM and I don't have an option to stuff in some more. I would like to know the best way to configure the hosts in order to achieve reasonable performance within these constraints. I have these two options: allocate as little RAM as possible to each virtual machine, or allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest. Something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.
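
    To put rough numbers on the overcommitment being discussed (a quick back-of-the-envelope Python sketch; it only restates the 12GB host, the roughly 20 guests and the 4 GB-per-instance option from the question, so the figures are illustrative rather than a recommendation):

        host_ram_gb = 12        # physical RAM on the ESXi host
        vm_count = 20           # approximate number of guests in the test lab

        # Option 1: divide physical RAM evenly, with no overcommitment at all
        per_vm_gb = host_ram_gb / vm_count              # = 0.6 GB per VM

        # Option 2: allocate a generous 4 GB per VM and rely on ballooning,
        # page sharing and hypervisor swapping to reclaim the difference
        allocated_gb = 4 * vm_count                     # = 80 GB allocated
        overcommit_ratio = allocated_gb / host_ram_gb   # ~6.7x overcommitment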

    Read the article

  • What Media Extender / Centre Set up should I use?

    - by Bryn Hird
    I have installed Cat6 throughout the house, which I use for telephony and networking. In my cellar I have a NAS server and a gigabit switch, and I want to install a media centre to stream my videos, music, photos and live TV (coax from the aerial to the cellar) over the Cat6. Yes, I know I can get content on the internet, but the shared experience of watching TV as a family as it happens is a big plus for live TV. I'm aiming for 1080p, and I want different users to be able to watch different channels. Max users = 4. I've played a little with Windows Media Centre; it works fine with live TV. Likewise I have XBMC up and running with live TV. The issue I have is what to put near the TV. I'd like a consistent user interface (grandma and the other technophobes in the house are continually pestering me on how to use different TVs, change channels, switch inputs etc.), so a key part of this for me is to make the user experience the same and simple, i.e. no keyboards or PCs hanging around the TV. I've just bought a Linksys DMA 2200 to test Windows Media Centre, but obviously off eBay as they're a dying breed, and with Windows Media Centre removed from Microsoft's plans such devices will only get rarer. As for 1080p, I think I can forget it with that setup. I have tested an Xbox 360, which also works, but ditto on Microsoft's plans for WMC. I was thinking of a WD TV Live to test the XBMC setup. Now to the question: any advice on Media Centre / Extender setups that will do the job as above and have some degree of future-proofing (building my own with my Raspberry Pi is a last resort)? I'd like to understand the standards involved in the future-proofing if anyone knows them (DLNA, RVU etc.).

    Read the article

  • How can I totally flatten a PDF in Mac OS on the command line?

    - by Matthew Leingang
    I use Mac OS X Snow Leopard. I have a PDF with form fields, annotations, and stamps on it. I would like to freeze (or "flatten") that PDF so that the form fields can't be changed and the annotations/stamps are no longer editable. Since I actually have many of these PDFs, I want to do this automatically on the command line. Some things I've tried/considered, with their degree of success: Open in Preview and Print to File. This creates a totally flat PDF without changing the file size. The only way to automate seems to be to write a kludgy UI-based AppleScript, though, which I've been trying to avoid. Open in Acrobat Pro and use a JavaScript function to flatten. Again, not sure how to automate this on the command line. Use pdftk with the flatten option. But this only flattens form fields, not stamps and other annotations. Use cupsfilter which can create PDF from many file formats. Like pdftk this flattened only the form fields. Use cups-pdf to hook into the Mac's printserver and save a PDF file instead of print. I used the macports version. The resulting file is flat but huge. I tried this on an 8MB file; the flattened PDF was 358MB! Perhaps this can be combined with a ghostscript call as in Ubuntu Tip:Howto reduce PDF file size from command line. Any other suggestions would be appreciated.
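
    As a sketch of the Ghostscript size-reduction step mentioned above (for shrinking the huge cups-pdf output in batch), something like the following Python wrapper could work; the folder names and the /ebook quality preset are assumptions rather than a tested recipe:

        import subprocess
        from pathlib import Path

        def shrink_pdf(src: Path, dst: Path) -> None:
            # Re-write the PDF through Ghostscript's pdfwrite device; the /ebook
            # preset trades some image quality for a much smaller file.
            subprocess.run([
                "gs", "-sDEVICE=pdfwrite", "-dCompatibilityLevel=1.4",
                "-dPDFSETTINGS=/ebook", "-dNOPAUSE", "-dQUIET", "-dBATCH",
                f"-sOutputFile={dst}", str(src),
            ], check=True)

        in_dir = Path("flattened")    # hypothetical folder of cups-pdf output
        out_dir = Path("shrunk")      # hypothetical destination folder
        out_dir.mkdir(exist_ok=True)
        for pdf in in_dir.glob("*.pdf"):
            shrink_pdf(pdf, out_dir / pdf.name)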

    Read the article

  • How intrusive is using VPN?

    - by Slade
    My company lets us work from home sometimes using VPN (during weather emergencies and stuff). When logging in, a big window comes up saying that the network is private and for employees only, and that there's no right to privacy while using the VPN. It makes sense that they don't want people poking around their network, but I wonder if the company can use the connection to look around my computer while I'm connected. I'm not entirely computer-illiterate, but I'm not a networks person at all, so the technical documents I've found don't help me. Is that possible, and if so, to what degree? UPDATE: Thanks Mark. The funneling thing is what I was really asking about. Mostly I was worried that I would already have some IM conversation open, or would log into eBay forgetting that the VPN was open, and that my company's IT people would see it or log my eBay password. Thanks again. ANOTHER UPDATE: What if my son wants to play online poker or Warcraft etcetera while I have the VPN on for work? Could my company think I'm the one playing if I'm not typing often?

    Read the article

  • Recommendations for good Unix MTA / groupware solutions? [closed]

    - by Jez
    Possible Duplicate: Exchange server replacement that runs on Linux I'm setting up a Debian server, and one of the things I need on it is an MTA. I don't want to use something like Exim or Postfix because I want something that ties in SMTP, POP3, and IMAP all in one (a la Microsoft Exchange). Most MTAs also seem to be hellishly difficult to configure. Try and read the Exim documentation; you could do a university degree on it (I'm not kidding). When you can get an HTTP server like Cherokee which is easy to configure and has a nice web interface, do MTAs or groupware solutions need to be that hard? I'm aware that some people think "the Unix way" is to have lots of different interacting pieces of software (like maybe an SMTP MTA, POP3 service, webmail service, and overarching manager to tie them all together), but I think this is a situation where that just makes things a lot harder to deal with and one large software suite fits in much more nicely. So, I'm looking for good open source software suites that will run on Debian that: Combine (at least) SMTP, POP3, and IMAP Are easy(ish) to configure Have a nice configuration web interface or GUI Are not defunct projects I don't mind if it's groupware and offers calendaring too, but I would only be using the e-mail functionality for now. Another nice-to-have would be built-in webmail (if we're combining a bunch of functionality, why not?) Note however that I do NOT need Outlook support. I am not really looking for an "Exchange replacement drop-in". The suites I've found so far that seem to match the above criteria (and have appropriate licenses) are Citadel, Kolab, and Zimbra. I'd appreciate anyone who has experience with any of these giving me the pros and cons of them, such as how easy they are to configure and what their performance is like. I'd also appreciate any other suggestions for solutions that fulfil my criteria that I may have missed out.

    Read the article

  • How does everyone set up AWS for PHP with a git workflow while worrying about distributing EC2?

    - by Parris
    Hello, I have been looking for something like Heroku but for PHP, and after much frustration (and almost finding what I need, but not quite) we decided to just go with AWS without any other abstraction. We are using PHP 5.3 (and CakePHP 1.3), and are currently using git. Ubuntu seems like the easiest way to get both of those on there, and we will most likely use that. We aren't really going to worry about outgoing email. We are using SMTP through Gmail, but will most likely switch to some other service eventually. I have 3 questions: 1) I have been looking at Zend Server, and I am not quite sure how it is more beneficial than XAMPP. Perhaps it is not? 2) I suppose that to make the application scale we would need multiple instances of some EC2 AMI, and then just duplicate it and such. The question then becomes: how do we make sure all EC2 instances are up to date? 3) I understand the concept of load balancing to some degree. I understand that in one region you select a bunch of servers and have the traffic load-balanced across them. The question then becomes: what about worldwide? How do I make it so that traffic is directed to the correct EC2 server? I have heard of Route 53, and tried signing up for that, but nothing appears in my control panel. Or perhaps it is just a DNS thing with my domain registrar? AHHH... some tutorial would be helpful!
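
    On question 2, one simple pattern for keeping instances in sync (sketched below in Python, not a prescription) is to push the same git revision to every instance over SSH; the host names, path and branch here are placeholders:

        import subprocess

        # Hypothetical host names; in practice these would come from the EC2 API
        # or a config file rather than being hard-coded.
        instances = ["ec2-host-1.example.com", "ec2-host-2.example.com"]
        app_path = "/var/www/app"   # assumed deployment path on each instance
        branch = "master"

        for host in instances:
            # Bring every instance to the same revision of the shared repository.
            subprocess.run(
                ["ssh", host,
                 f"cd {app_path} && git fetch origin && git reset --hard origin/{branch}"],
                check=True,
            )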

    Read the article

  • Different font SIZES in a Text Editor, based on Script (Alphabet) type (i.e. per Unicode Code-Block)

    - by fred.bear
    Some non-Latin-based scripts (alphabets) have more detail in their glyphs than the Latin-based-script equivalents, and typically need a larger font to give the same degree of legibility (resolution-wise). Sometimes, both script types need to be present in the same file. Notepad++ allows different font SIZES (and colour, etc.) courtesy of syntax highlighting. This allows me to display larger-fonted non-Latin-based script in a // BIG-FONT comment. Although this has been quite handy for me in some situations, it is quite limited. A Word Processor can handle this scenario, but I'm not interested in that. I want a nice simple(?) plain(?) Text Editor to do it... on a per-script-type basis... e.g. mixing Latin-1 and Devanagari (and Mandarin, and ... Such a thing may not exist, but Notepad++ has shown that a simple(?) plain(?) Text Editor is capable of it. Does anyone know of such a Text Editor? ...Q. Why not a Word Processor? ...A. Because GCC and Python don't like that format! But UTF-8 is fine.

    Read the article

  • MySQL InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of the updates into memory in a Ruby script and group them into a hash of arrays, so that each update_value is a key and each array is a sorted list of product_ids: { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }. Updates are then issued in the format UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6); UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10); I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some speed decrease here, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go they're pretty good: heaps of memory/CPU, and the whole DB should fit in memory with plenty of room to grow.
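
    For reference, the grouping step described above (bucket product_id/update_value pairs by value, then emit IN-list UPDATEs over sorted ids) looks roughly like this; it is a Python sketch of the Ruby approach, with an assumed batch size:

        from collections import defaultdict

        def build_update_statements(rows, batch_size=1000):
            # rows: iterable of (product_id, update_value) pairs from the updates table
            groups = defaultdict(list)
            for product_id, update_value in rows:
                groups[update_value].append(product_id)

            statements = []
            for update_value, ids in groups.items():
                ids.sort()  # keep each IN list in primary-key order
                for i in range(0, len(ids), batch_size):
                    id_list = ",".join(str(p) for p in ids[i:i + batch_size])
                    statements.append(
                        f"UPDATE product SET update_value = {update_value} "
                        f"WHERE product_id IN ({id_list});"
                    )
            return statements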

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL authentication, so we have virtual hosts set up for both of them. They each reference their own domain.key, domain.crt and domain.intermediate.crt files. Each CSR and certificate file for each site was set up using its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate is presented, even if we're visiting the second site. So for example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate installation, it references Cadbury's certificate. We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server. Both certificates are provided by Thawte.com. I came across a few potential solutions, with no degree of success. I'm hoping someone else has a decent solution? Thanks. Edit: The only other solution that seems to have provided success to some people is using SNI with Apache. However, the setups described there didn't seem to coincide with our setup at all.

    Read the article

  • SanjayP's venture after Microsoft involves no Microsoft

    - by eddraper
    When I was at Microsoft, I always found Sanjay Parthasarathy to be a bright and passionate leader. While he was a bit disconnected at times from what was really going on out in the trenches, I always thought he was a true believer in what we in Developer Platform and Evangelism (DPE) were doing. He got it. He had started DPE and kicked a lot of doors down up in Redmond to make it happen. Back in the early 2000s, battles over platform choices at large customers were trench warfare… bayonets and hand grenades at the P-Code level. This model was not at all suited to Microsoft's org structure at the time. While there were plenty of people fully able to have competitive conversations around Windows Server, or AD, or Exchange, or the desktop, there weren't many who could have deep technical conversations around Java vs .NET and the platform "stack" as a cohesive, unified unit of value. This task fell to DPE. Sanjay ended up leaving Microsoft a number of months before me in 2009, and I remember thinking these exact words: "holy shit, SanjayP left Microsoft." When SanjayP left DPE years before that, Sheila Gulati had stepped into his shoes and I thought we were starting to miss a beat. Sheila had built an amazing business at Microsoft India, but I don't recall being inspired by her as a leader. SanjayP's talks felt like the opening scene of "Patton" with George C. Scott pacing in front of the American flag. Sheila was a voice on a con-call. When she moved on in 2007, Walid Abu-Hadba was given the reins. Personally, I don't ever recall even seeing his face. I think I might recall hearing his voice on some con-calls, but for all intents and purposes he was invisible to me. Perhaps this was the beginning of my carelessness around seeking "visibility." Fast forward to Build 2011. First off, we have no PDC – we have Build. Microsoft had made an 11-year investment by this time in building an organization to make its technology relevant to developers. One would think such an org would be in the driver's seat of such an event, but we see Windows product group people on the podiums. Watching, I could see the messaging unfold… but no story. It was like the old days. Demos and PowerPoints by team members building the tech, and in many cases VPs. The ensuing confusion is almost legendary now. Windows 8 was, and is, a pretty big deal… but who is telling the story – not just features and benefits, but the story of how it all fits together? Having been out of Microsoft for two years now, and looking in, I can only conclude that the "DPE of old" has at best been emasculated, and at worst been completely marginalized by internal politics, or perhaps the eternal march of the corporate entropy generator that resides at all large companies. I don't think this is a good thing for anyone. And now, back to Sanjay, who is the father of Microsoft DPE… I noticed that he has moved back to India and is doing start-up work. His current company Indix looks to be doing some interesting things with "big data", and here's their stack: nary a trace of anything Microsoft. What could account for this? I wonder…. Better availability of labor and expertise in India for this stack? Dunno, but even in India, leet R and Hadoop skills have to be hard to find. Technical superiority? This, I sincerely doubt. This stack, with SanjayP's name as CEO, leaves me with an unsettling feeling. If he did believe, he no longer does. One doesn't place bets with real money on things they don't believe in.
Perhaps he never did believe, and was a corporate creature seeking to find a niche for himself after which he manipulated me and others.  Or perhaps… anger… be it passive aggression or an outright “in your face F*** you” to his former masters. I guess in the end, only he knows the true reason… But I have my theory...

    Read the article

  • Clean Up the New Ubuntu Grub2 Boot Menu

    - by Trevor Bekolay
    Ubuntu adopted the new version of the Grub boot manager in version 9.10, getting rid of the old problematic menu.lst. Today we look at how to change the boot menu options in Grub2. Grub2 is a step forward in a lot of ways, and most of the annoying menu.lst issues from the past are gone. Still, if you're not vigilant with removing old versions of the kernel, the boot list can still end up being longer than it needs to be. Note: You may have to hold the SHIFT button on your keyboard while booting up to get this menu to show. If only one operating system is installed on your computer, it may load it automatically without displaying this menu. Remove Old Kernel Entries The most common clean up task for the boot menu is to remove old kernel versions lying around on your machine. In our case we want to remove the 2.6.32-21-generic boot menu entries. In the past, this meant opening up /boot/grub/menu.lst…but with Grub2, if we remove the kernel package from our computer, Grub automatically removes those options. To remove old kernel versions, open up Synaptic Package Manager, found in the System > Administration menu. When it opens up, type the kernel version that you want to remove in the Quick search text field. The first few numbers should suffice. For each of the entries associated with the old kernel (e.g. linux-headers-2.6.32-21 and linux-image-2.6.32-21-generic), right-click and choose Mark for Complete Removal. Click the Apply button in the toolbar and then Apply in the summary window that pops up. Close Synaptic Package Manager. The next time you boot up your computer, the Grub menu will not contain the entries associated with the removed kernel version. Remove Any Option by Editing /etc/grub.d If you need more fine-grained control, or want to remove entries that are not kernel versions, you must change the files located in /etc/grub.d. /etc/grub.d contains files that hold the menu entries that used to be contained in /boot/grub/menu.lst. If you want to add new boot menu entries, you would create a new file in this folder, making sure to mark it as executable. If you want to remove boot menu entries, as we do, you would edit files in this folder. If we wanted to remove all of the memtest86+ entries, we could just make the 20_memtest86+ file non-executable, with the terminal command sudo chmod -x 20_memtest86+ Followed by the terminal command sudo update-grub Note that memtest86+ was not found by update-grub because it will only consider executable files. However, instead, we're going to remove the Serial console 115200 entry for memtest86+… Open a terminal window Applications > Accessories > Terminal. In the terminal window, type in the command: sudo gedit /etc/grub.d/20_memtest86+ The menu entries are found at the bottom of this file. Comment out the menu entry for serial console 115200 by adding a "#" to the start of each line. Save and close this file. In the terminal window you opened, enter in the command sudo update-grub Note: If you don't run update-grub, the boot menu options will not change! Now, the next time you boot up, that strange entry will be gone, and you're left with a simple and clean boot menu. Conclusion While changing Grub2's boot menu may seem overly complicated to legacy Grub masters, for normal users, Grub2 means that you won't have to change the boot menu that often. Fortunately, if you do have to do it, the process is still pretty easy. For more detailed information about how to change entries in Grub2, this Ubuntu forum thread is a great resource.
    If you're using an older version of Ubuntu, check out our article on how to clean up the Ubuntu Grub boot menu after upgrades.

    Read the article

  • WebGL First Person Camera - Matrix issues

    - by Ryan Welsh
    I have been trying to make a WebGL FPS camera.I have all the inputs working correctly (I think) but when it comes to applying the position and rotation data to the view matrix I am a little lost. The results can be viewed here http://thistlestaffing.net/masters/camera/index.html and the code here var camera = { yaw: 0.0, pitch: 0.0, moveVelocity: 1.0, position: [0.0, 0.0, -70.0] }; var viewMatrix = mat4.create(); var rotSpeed = 0.1; camera.init = function(canvas){ var ratio = canvas.clientWidth / canvas.clientHeight; var left = -1; var right = 1; var bottom = -1.0; var top = 1.0; var near = 1.0; var far = 1000.0; mat4.frustum(projectionMatrix, left, right, bottom, top, near, far); viewMatrix = mat4.create(); mat4.rotateY(viewMatrix, viewMatrix, camera.yaw); mat4.rotateX(viewMatrix, viewMatrix, camera.pitch); mat4.translate(viewMatrix, viewMatrix, camera.position); } camera.update = function(){ viewMatrix = mat4.create(); mat4.rotateY(viewMatrix, viewMatrix, camera.yaw); mat4.rotateX(viewMatrix, viewMatrix, camera.pitch); mat4.translate(viewMatrix, viewMatrix, camera.position); } //prevent camera pitch from going above 90 and reset yaw when it goes over 360 camera.lockCamera = function(){ if(camera.pitch > 90.0){ camera.pitch = 90; } if(camera.pitch < -90){ camera.pitch = -90; } if(camera.yaw <0.0){ camera.yaw = camera.yaw + 360; } if(camera.yaw >360.0){ camera.yaw = camera.yaw - 0.0; } } camera.translateCamera = function(distance, direction){ //calculate where we are looking at in radians and add the direction we want to go in ie WASD keys var radian = glMatrix.toRadian(camera.yaw + direction); //console.log(camera.position[3], radian, distance, direction); //calc X coord camera.position[0] = camera.position[0] - Math.sin(radian) * distance; //calc Z coord camera.position[2] = camera.position [2] - Math.cos(radian) * distance; console.log(camera.position [2] - (Math.cos(radian) * distance)); } camera.rotateUp = function(distance, direction){ var radian = glMatrix.toRadian(camera.pitch + direction); //calc Y coord camera.position[1] = camera.position[1] + Math.sin(radian) * distance; } camera.moveForward = function(){ if(camera.pitch!=90 && camera.pitch!=-90){ camera.translateCamera(-camera.moveVelocity, 0.0); } camera.rotateUp(camera.moveVelocity, 0.0); } camera.moveBack = function(){ if(camera.pitch!=90 && camera.pitch!=-90){ camera.translateCamera(-camera.moveVelocity, 180.0); } camera.rotateUp(camera.moveVelocity, 180.0); } camera.moveLeft = function(){ camera.translateCamera(-camera.moveVelocity, 270.0); } camera.moveRight = function(){ camera.translateCamera(-camera.moveVelocity, 90.0); } camera.lookUp = function(){ camera.pitch = camera.pitch + rotSpeed; camera.lockCamera(); } camera.lookDown = function(){ camera.pitch = camera.pitch - rotSpeed; camera.lockCamera(); } camera.lookLeft = function(){ camera.yaw= camera.yaw - rotSpeed; camera.lockCamera(); } camera.lookRight = function(){ camera.yaw = camera.yaw + rotSpeed; camera.lockCamera(); } . If there is no problem with my camera then I am doing some matrix calculations within my draw function where a problem might be. 
//position cube 1 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.translate(worldMatrix, worldMatrix, [-20.0, 0.0, -30.0]); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); setShaderMatrix(); gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer); gl.vertexAttribPointer(shaderProgram.attPosition, 3, gl.FLOAT, false, 8*4,0); gl.vertexAttribPointer(shaderProgram.attTexCoord, 2, gl.FLOAT, false, 8*4, 3*4); gl.vertexAttribPointer(shaderProgram.attNormal, 3, gl.FLOAT, false, 8*4, 5*4); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, myTexture); gl.uniform1i(shaderProgram.uniSampler, 0); gl.useProgram(shaderProgram); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); //position cube 2 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); mat4.translate(worldMatrix, worldMatrix, [40.0, 0.0, -30.0]); setShaderMatrix(); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); //position cube 3 worldMatrix = mat4.create(); mvMatrix = mat4.create(); mat4.multiply(mvMatrix, worldMatrix, viewMatrix); mat4.translate(worldMatrix, worldMatrix, [20.0, 0.0, -100.0]); setShaderMatrix(); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.numItems); camera.update();

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However, while I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well). Databases: At the moment I plan to use the MySQLi features in PHP5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another. My reasoning behind this is that sending queries to different databases will ease up the load on them, as one database = 3 load sources. Also, would this still be effective if they were all on the same server? Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is used. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: say I have comments on different articles. Which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page that is re-cached each time a comment is added/edited/deleted? Finally: does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross
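
    For the comments question, the second option (cache the rendered HTML and re-cache it whenever a comment changes) can be sketched as below; this is illustrative Python rather than PHP, and the cache path and render_comments callback are assumptions:

        from pathlib import Path

        CACHE_DIR = Path("/tmp/page-cache")          # assumed cache location
        CACHE_DIR.mkdir(parents=True, exist_ok=True)

        def cached_comments_html(article_id, render_comments):
            # render_comments(article_id) stands in for the DB query + template step.
            cache_file = CACHE_DIR / f"comments-{article_id}.html"
            if cache_file.exists():
                return cache_file.read_text()
            html = render_comments(article_id)
            cache_file.write_text(html)
            return html

        def invalidate_comments(article_id):
            # Call this whenever a comment is added, edited or deleted, so the next
            # read regenerates the cached copy instead of serving stale HTML.
            (CACHE_DIR / f"comments-{article_id}.html").unlink(missing_ok=True)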

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting. Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade. In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class. On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery for two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade. Well, that wasn't the case as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and review your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….! So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • How to install SharePoint Server 2013 Preview

    - by ybbest
    The Office 2013 and SharePoint Server 2013 Preview was announced yesterday, and as a SharePoint developer I am really excited to learn all the new features and capabilities. Today I will show you how to install the preview. 1. Create a service account called SP2013Install and give this account Dbcreator and SecurityAdmin in SQL Server 2012. 2. You need to run the following script to set the 'max degree of parallelism' setting to the required value of 1 in SQL Server 2012 (using sysadmin privilege) before configuring the SharePoint farm. Otherwise, you might get the error 'This SQL Server instance does not have the required max degree of parallelism setting of 1': sp_configure 'show advanced options', 1; GO RECONFIGURE WITH OVERRIDE; GO sp_configure 'max degree of parallelism', 1; GO RECONFIGURE WITH OVERRIDE; GO 3. Download the SharePoint preview from here; I am going to install it on Windows Server 2008 R2 with SQL 2012. 4. Click Install software prerequisites; this works fine with an internet connection. (However, if you do not have an internet connection, it is a bit tricky to install Windows Azure AppFabric, as it has to be installed using the prerequisite installer. Your computer might reboot a few times in the process.) 5. After the prerequisites are installed completely, you can then install the Preview. Click Install SharePoint Server and enter the product key you get from the Preview download page. 6. Accept the license terms and click Next. 7. Leave the default path for the file location. 8. You can now start the installation process. 9. After the binary files are installed, you can then configure your farm using the farm configuration wizard. 10. Specify the database server and the install account. 11. Specify the SharePoint farm passphrase. 12. Specify the port number; you should choose your own favorite port number. 13. Choose Create a New Server Farm and click Next. 14. Double-check the settings and click Next to configure the farm install. 15. Finally, your farm is configured successfully and you are now able to go to your Central Admin site http://sp2010:6666/ 16. You should configure the services manually or automate this using PowerShell (if you would like to understand why, you can read the blog post here); however, I will use the wizard to configure them automatically here, as this is a test machine. After the configuration is complete, you will be able to see your SharePoint site. 17. To start evaluating the Preview, you need to install Visual Studio 2012 RC, Microsoft Office Developer Tools for Visual Studio 2012, SharePoint 2013 Designer Preview, and Office 2013 Preview. References: Download SharePoint Server 2013 Download Microsoft Visio Professional 2013 Preview Install SharePoint 2013 Preview Hardware and software requirements for SharePoint 2013 Preview SharePoint 2013 IT Pro and Developer training materials released Plan for SharePoint 2013 Preview Microsoft Office Developer Tools for Visual Studio 2012 SharePoint 2013 Preview Office365 for the SharePoint 2013 preview SharePoint Designer 2013 Download: Microsoft Office 2013 Preview Language Pack Try Office

    Read the article

  • So No TECH job so far.

    - by Ratman21
    O I found some temp work for the US Census and I have managed to keep the house (so far) but, it looks like I/we are going to have to do a short sale and the temp job will be ending soon.   On top of that it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to Appling for work at the KFC.   This is type a work I started with, before I was a tech geek and really I didn’t think I would be doing this kind of work in my later years but, I have a wife and kid. So I got to suck it up and do it.   Oh and here is my new resume…go ahead I know you want to tare it up. I really don’t care any more.   Scott L. Newman 45219 Dutton Way, Callahan, FL32011 H: (904)879-4880 C: (352)356-0945 E: [email protected] Web:  http://beingscottnewman.webs.com/                                                       ______                                                                                 OBJECTIVE To obtain a Network or Technical support position     KEYWORD SUMMARY CompTIA A+, Network+, and Security+ Certified., Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG   QUALIFICATIONS SUMMARY Twenty years’ experience in computer operations, technical support, and technical writing. Also have two and half years’ experience in internet / intranet operations.   PROFESSIONAL EXPERIENCE October 2009 – Present*   Volunteer Web site and PC technician – Part time       True Faith Christian Fellowship Church – Callahan, FL, Project: Create and maintain web site for Church to give it a worldwide exposure Aug 2008 – September 2009:* Volunteer Church sound and video technician – Part time      Thomas Creek Baptist Church – Callahan, FL   *Note Jobs were for the learning and/or keeping updated on skills, while looking for a tech job and training for new skills.   February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL (FNIS acquired Certegy in 2005 and out of 20 personal, was one of three kept on.) August 2003 to February 2005: Senior NetOps Operator, Certegy, St.Pete, Fl. (August 2003, Certegy terminated contract with EDS and out of 40 personal, was one of six kept on.) Projects: Creation and update of listing and placement for all raised floor equipment at St.Pete site. Listing was made up of, floor plan of the raised floor and equipment racks diagrams showing the placement of all devices using Visio. This was cross-referenced with an inventory excel document showing what dept was responsible for each device. Sole creator of Network operation and Server Operation procedures guide (NetOps Guide).  Expertise: Resolving circuit and/or router issues or assist circuit carrier in resolving issue from the company Network Operation Center (NOC). As well as resolving application problems or assist application support in resolution of it.     July 1999 to August 2003: Senior NetOps Operator,EDS (Certegy Account), St.Pete, FL Same expertise and on going projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999)         January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St.Pete & Tampa, FL Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues   EDUCATION ? 
New Horizons Computer Learning Center, Jacksonville, Florida - CompTIA A+, Security+, and     Network+ Certified.                        Currently working on CCNA Certification 07/30/10 ? Mott Community College, Flint, Michigan – Associates Degree - Data Processing and General Education ? Currently studying Japanese

    Read the article

  • cf3 Can't stat ... in files.copyfrom promise

    - by Xerxes
    On the client: # cf-agent -KIv ... cf3 -> Handling file existence constraints on /etc/cfengine3 cf3 -> Copy file /etc/cfengine3 from /srv/cfengine/sysconf/server/inputs check cf3 No existing connection to 172.31.69.83 is established... cf3 Set cfengine port number to 5308 = 5308 cf3 -> Connect to 172.31.69.83 = 172.31.69.83 on port 5308 cf3 LastSaw host 172.31.69.83 now cf3 Loaded /var/lib/cfengine3/ppkeys/root-172.31.69.83.pub cf3 .....................[.h.a.i.l.]................................. cf3 Strong authentication of server=172.31.69.83 connection confirmed cf3 Server returned error: Unspecified server refusal (see verbose server output) cf3 Can't stat /srv/cfengine/sysconf/server/inputs in files.copyfrom promise cf3 ?> defining promise result class Cfengine_Inputs_Updated_Failed .... cf3 ......................................................... cf3 Promise handle: cf3 Promise made by: [cf-agent.cf ] FAILED 172.31.69.83:///srv/cfengine/sysconf/server/inputs -> localhost:///etc/cfengine3 However, on the server (172.31.69.83), there's no reason why it can't stat the directory: cyrus:/srv/cfengine/sysconf/server# ls -l /srv/cfengine/sysconf/server/inputs total 52 -rw-r--r-- 1 root root 2142 Sep 6 21:54 cf-agent.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 cf-execd.cf -rw-r--r-- 1 root root 4517 Sep 6 21:44 cf-serverd.cf -rw-r--r-- 1 root root 3082 Sep 6 21:44 dns.cf -rw-r--r-- 1 root root 2028 Sep 6 15:12 failsafe.cf -rw-r--r-- 1 root root 5966 Sep 6 21:44 ldap-masters.cf -rw-r--r-- 1 root root 4380 Sep 6 18:31 ldap-security.cf -rw-r--r-- 1 root root 2735 Sep 6 08:21 lib-core.cf -rw-r--r-- 1 root root 1506 Sep 6 21:45 lib-utils.cf -rw-r--r-- 1 root root 2635 Sep 6 20:27 lib-vars.cf -rw-r--r-- 1 root root 2057 Sep 3 17:46 nss.cf -rw-r--r-- 1 root root 1472 Sep 6 18:31 packages.cf -rw-r--r-- 1 root root 1257 Sep 6 18:01 pam-security.cf -rw-r--r-- 1 root root 4019 Sep 6 19:32 promises.cf -rw-r--r-- 1 root root 2808 Sep 3 17:22 site.cf -rw-r--r-- 1 root root 1670 Sep 6 18:31 sudo-security.cf -rw-r--r-- 1 root root 831 Sep 6 18:31 sys-security.cf -rw-r--r-- 1 root root 890 Sep 6 18:31 sys-users.cf cyrus:/srv/cfengine/sysconf/server# I don't see anything interesting server side either when running: /usr/sbin/cf-serverd -d4 --verbose --no-fork And the following does not have any complaints: /usr/sbin/cf-promises -v Any ideas? I'm running cfengine3 on debian, v3.0.5+dfsg-1 - and the cf-agent.cf file is as follows: bundle agent Update { files: linux:: "${cf3.path[inputs]}" action => immediate, move_obstructions => "true", depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-inputs]}", "true", "0400" ), classes => DefineSoftClass("Cfengine_Inputs_Updated") ; "${cf3.path[sbin]}" comment => "Setting cf3 client sbin scripts: ${cf3.path[sbin]}/", action => immediate, depth_search => Recursive, copy_from => MirrorFrom( "${cf3.host[server]}", "${cf3.path[scm-cnt-scripts]}", "false", "0555" ) ; reports: Cfengine_Inputs_Updated:: "[cf-agent.cf ] Services:CFAgent:Inputs:Updated"; Cfengine_Inputs_Updated_Failed:: "[cf-agent.cf ] FAILED ${cf3.host[server]}://${cf3.path[scm-inputs]} -> localhost://${cf3.path[inputs]}"; } I lie, there is something interesting with a little more debugging... AccessControl(/srv/cfengine/sysconf/server/inputs) AccessControl, match(/srv/cfengine/sysconf/server/inputs,client.com.au) encrypt request=1 Examining rule in access list (/srv/cfengine/sysconf/server/inputs,/home/cfengine)? 
cf3 Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs Unappending Host client.com.au denied access to /srv/cfengine/sysconf/server/inputs cf3 Access control in sync Unappending Access control in sync Transaction Send[t 59][Packed text] Attempting to send 67 bytes SendSocketStream, sent 67 cf3 From (host=client.com.au,user=root,ip=172.31.69.3) Unappending From (host=client.com.au,user=root,ip=172.31.69.3) cf3 REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) Unappending REFUSAL of request from connecting host: (SYNCH 1283777156 STAT /srv/cfengine/sysconf/server/inputs) RecvSocketStream(8) cf3 -> Accepting a connection I'll keep looking.

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty on the publisher's price is considered normal. The publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money. But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back. To celebrate the occasion, O'Reilly has offered to sponsor a giveaway of five (5) copies of Regular Expressions Cookbook. These are the rules of the game: You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not. Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam, it will not be published and will not be eligible for any prize. If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era. Each person commenting has only one chance to win, regardless of the number of comments posted. O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery. Each winner can choose to receive a printed copy or an ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping to anywhere in the world, but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning. The contest ends 28 February 2010, GMT+7 (Thai time). Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are: 48: Xiaozu, 45: David Chisholm, 19: Miquel Burns, 33: Aaron Rice, 17: David Laing. Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.
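
    To make the royalty arithmetic above concrete (the retail price is an assumed figure, used only for illustration):

        retail_price = 39.99                    # assumed retail price per copy
        publisher_price = retail_price * 0.45   # publisher's price is ~45% of retail
        royalty = publisher_price * 0.10        # ~10% royalty on the publisher's price
        per_author = royalty / 2                # two co-authors split it
        print(round(royalty, 2), round(per_author, 2))   # -> 1.8 0.9 (dollars per copy)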

    Read the article

  • SQL SERVER – Vacation, Travel and Study – A New Concept

    - by pinaldave
    Quite often when developers go to training sessions they either find it very boring because of study or great because they treat it as a vacation. There should be a perfect balance between study and extra activities. Studying is Boring Studying is very hard. Nobody likes to study, very few people are going to list “studying” as one of their favorite hobbies.  Already my young daughter knows she doesn’t want to study, and I don’t want to either.  If you read my blog regularly you know that I am always saying that we need to be students for life.  However, all philosophy aside, if you are put in a room with an instructor to study for eight hours a day, you are going to feel bored, uncomfortable, and unhappy.  I was a trainer myself, and I understand that all-day study sessions are no fun – even for the trainer.  I always tried to be entertaining, but even eight hours of jokes and laughter is tiresome.  Eight hours at a comedy club would be boring after a little while – and if we can’t even enjoy fun stuff for eight hours straight, how can we expect to study for eight hours straight? Studying for Career or Certification Even those who have advanced degrees and went to college for years, or even decades, find studying hard.  There is a difference between studying for a career and studying for a certification.  At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.  You can also choose your major and what you want to spend your time studying.  For certification you do not have this luxury.  You have to learn and memorize specific parts, and there is no option to change your major if you don’t like it.  Your option is to gain your certification, or fail.  Many people will find that last option unacceptable. Studying at Vacation We have established: eight hours of uninterrupted study is boring.  That is why I am so excited about what my very good friend is doing with Koenig Solutions.  His whole goal was to make classes that are intensive but not in a traditional format.  He adds in aspects of the vacation.  It is true that you will study and sit with instructors for six or eight hours a day, but in the mornings and evenings you can go out and see the sights in exotic locations.  He has chosen the locations for his training courses for their proximity to tourist attractions like the Himalayas, the Taj Mahal, and Goa, India’s most popular resort town.  Every location has access to great experiences like river rafting, safari tours, or meditation.  There are five locations to choose from: Dehli, Dehradun, Shimla (close to the Himalayas), Goa Beach, and Dubai.  After a day of classes and hours of sight-seeing, you will be more than ready to return to campus tired and ready to study.  This is the kind of study I can do! My friend’s point is that studying and fun can still go hand-in-hand.  How many times have we heard a professor say this?  But this time it is true.  There is great fun in learning in exotic locations.  If you want to travel in India and are interested in also taking the opportunity to learn something, let Koenig Solutions know. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world. I always like to explore beyond what I know and share what I learn. The weekend is the best time, when I sit around downloading random software onto my machine, which I like to call a lab machine (it is a pretty old laptop and hardly qualifies as a lab machine), and experiment with it. There are so many free betas available for download that it's hard to keep track, and even harder to find the time to play with very many of them. This blog is about one you shouldn't miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7. I had already downloaded their Beta 6, and yesterday did the same for 7. My impression is that they are onto something very, very interesting. In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world's first "emergent" database, which they tout as a brand new transactional database intended to dramatically change what's possible with OLTP. It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting claim for sure, one that made me explore more. Based on what I've seen so far, they are addressing the architectural tension between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements and the traditional relational database management system's architecture of central control. Here's my experience with the NuoDB Beta 6 so far: First, they pretty much threw away all the features you'd associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep. It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user's perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One of the interesting features is that one can run a transactional node and a storage node on my Windows laptop as well as on other platforms – indeed interesting for sure. It's quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when and where you need it. This is unlike what we have traditionally done to scale a database for an application – replace the hardware with something more powerful (faster CPU and disks). This is where I started to feel like NuoDB is onto something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we've never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner – while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is hard to believe until you've experienced it in action. I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.
Before I say anything more, I am going to run more experiments and tests on this product and compare it with other existing similar products. For me it was a weekend well spent on learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance. Even under normal business conditions, there is a great demand for best practice sharing within the industry, as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for its business, and as soon as they get it right, bancassurers need to adapt the mix to keep up with ever-changing regulations, competition and economic conditions. To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and competition high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long-term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure. However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure from the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and the best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact of Solvency II on capital structuring were debated openly.
Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference. The cross-selling opportunity - which will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the business's ability to implement new strategies and maintain competitiveness in turbulent times. My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure." Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • How to install SharePoint Server 2013 Preview

    - by ybbest
    The Office 2013 and SharePoint Server 2013 Previews were announced yesterday and, as a SharePoint developer, I am really excited to learn all the new features and capabilities. Today I will show you how to install the preview. 1. Create a service account called SP2013Install and give this account the dbcreator and securityadmin roles in SQL Server 2012. 2. You need to run the following script to set the 'max degree of parallelism' setting to the required value of 1 in SQL Server 2012 (using sysadmin privileges) before configuring the SharePoint farm; otherwise, you might get the error 'This SQL Server instance does not have the required max degree of parallelism setting of 1' (a verification sketch follows at the end of this post): sp_configure 'show advanced options', 1; GO RECONFIGURE WITH OVERRIDE; GO sp_configure 'max degree of parallelism', 1; GO RECONFIGURE WITH OVERRIDE; GO 3. Download the SharePoint preview from here; I am going to install it on Windows Server 2008 R2 with SQL Server 2012. 4. Click Install software prerequisites; this works fine with an internet connection. (However, if you do not have an internet connection, it is a bit tricky to install Windows Azure AppFabric, as it has to be installed using the prerequisite installer. Your computer might reboot a few times in the process.) 5. After the prerequisites are installed completely, you can then install the Preview. Click Install SharePoint Server and enter the product key you get from the Preview download page. 6. Accept the license terms and click Next. 7. Leave the default path for the file location. 8. You can now start the installation process. 9. After the binary files are installed, you can then configure your farm using the farm configuration wizard. 10. Specify the database server and the install account. 11. Specify the SharePoint farm passphrase. 12. Specify the port number; you should choose your own favorite port number. 13. Choose Create a New Server Farm and click Next. 14. Double-check the settings and click Next to configure the farm install. 15. Finally, your farm is configured successfully and you are now able to go to your Central Admin site http://sp2010:6666/ 16. You should configure the services manually or automate this using PowerShell (if you would like to understand why, you can read the blog post here); however, I will use the wizard to configure them automatically here, as this is a test machine. After the configuration is complete, you will now be able to see your SharePoint site. 17. To start evaluating the Preview, you need to install Visual Studio 2012 RC, Microsoft Office Developer Tools for Visual Studio 2012, SharePoint Designer 2013 Preview, and Office 2013 Preview. References: Download SharePoint Server 2013, Download Microsoft Visio Professional 2013 Preview, Install SharePoint 2013 Preview, Hardware and software requirements for SharePoint 2013 Preview, SharePoint 2013 IT Pro and Developer training materials released, Plan for SharePoint 2013 Preview, Microsoft Office Developer Tools for Visual Studio 2012, SharePoint 2013 Preview, Office 365 for the SharePoint 2013 preview, SharePoint Designer 2013, Download: Microsoft Office 2013 Preview Language Pack, Try Office
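    Step 2 above applies the setting with sp_configure. As a quick sanity check that is not part of the original walkthrough, here is a minimal T-SQL sketch (assuming you are connected to the same SQL Server 2012 instance with sysadmin rights) to confirm the value SharePoint requires is actually in effect before you run the farm configuration wizard:

    -- Verify the running value of 'max degree of parallelism' on this instance.
    -- SharePoint 2013 requires value_in_use to be 1; if it is not, re-run the sp_configure script in step 2.
    SELECT name, value, value_in_use
    FROM sys.configurations
    WHERE name = 'max degree of parallelism';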

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. Sampling focuses on how we collect data. In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1,000 to 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build, only to find that the results don't meet expectations, start small; once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size. Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value!
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
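    To make the build/test split described above concrete, here is a minimal SQL sketch against Oracle Database using the SAMPLE clause and the ORA_HASH function. The table and column names (customers, cust_id) are hypothetical, and the techniques covered in the upcoming posts may take a different approach:

    -- Draw an approximate 1% simple random sample, e.g. for quick experiments with algorithm settings.
    SELECT *
    FROM customers SAMPLE (1);

    -- Split the build data roughly 60/40 using a deterministic hash of the key column.
    -- ORA_HASH(cust_id, 99) returns a bucket between 0 and 99, so "< 60" keeps about 60% of rows.
    CREATE TABLE customers_build AS
      SELECT * FROM customers WHERE ORA_HASH(cust_id, 99) < 60;

    CREATE TABLE customers_test AS
      SELECT * FROM customers WHERE ORA_HASH(cust_id, 99) >= 60;

    Note that the hash-based split is repeatable, which is convenient when you want the same held-aside set across model builds; a plain SAMPLE clause would generally draw a different set on each run.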

    Read the article

  • Where is the value of OEA

    - by [email protected]
    In a room full of architects, if you were to ask for the definition of enterprise architecture, or the importance thereof, you are likely to get a number of varying viewpoints, ranging from a complete analysis of the digital assets of an organization to a strategic alignment of business goals/objectives to IT initiatives. Similarly, in a room full of senior business executives, if you asked them how they see their IT groups and their effectiveness in aligning to business strategy, you would get a myriad of responses, ranging from “a huge drain on our bottom line”, “always more expensive than budgeted”, “lack of agility, by the time IT is ready, my business strategy has changed”, and, on the rare occurrence, “a leader of innovation that is in lock step with my business strategy”. However, does this necessarily demonstrate the overall value of enterprise architecture? Having a framework and process is of critical importance to help produce a number of the artefacts that ultimately align technology goals and initiatives to business strategy; however, is that really where the value is? I believe that first we need to understand the concept of value. Value typically is a measure of sorts: when we purchase a product, its value is equivalent to the maximum amount that someone is willing to pay for the product. However, is the same equation valid in terms of the business value of enterprise architecture? Is the library of artefacts generated through a process/framework, inclusive of a strategic roadmap to realize the enterprise architecture, where the value is? If we agree that enterprise architecture is the alignment of IT and IT assets to support business strategy, and that by achieving our business strategy we have increased the business value of the enterprise, then it seems that, in order to really identify the true value of an enterprise architecture, we need to understand how we measure business value. A number of formal measurement methodologies exist for this purpose: business models, balanced scorecards, etc. After we have an understanding of how to measure the business value of each of the organizational units within an enterprise, we understand how the enterprise architecture contributes to the success of business strategy, and can EXECUTE on the roadmap to implement and deliver the IT initiatives that provide MEASURABLE returns. As we analyse the value chain of each of the individual organizational units within the enterprise, we may identify how that unit has performed by quantitatively measuring its proximity to achieving the goals defined by the business for each unit. However, it would appear that true business value (the aggregate of all of the business units in the value chain) is to some degree subjectively measured; for public companies it lies in shareholder value, that is, the maximum amount that someone would pay for shares of the organization.

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale; Basel III liquidity analysis; private banking – transaction filtering to feed [relational] data marts; a Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere; a 360-degree view of customers – become pro-active and look at events across lines of business (for example, make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active in servicing the customer); and Treasury and Security – a Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]. On data mining: bypass data engineering [I interpret this as running on all of a large data set rather than on samples]; fraud prevention – work on event triggers, say a number of failed log-ins to the website (when they occur, grab web logs, firewall logs and rules and start to figure out who is trying to log in: is this me, who forgot his password, or is it someone in some other country trying to guess passwords?); and trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline. One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools and Big Data reporting, and new tools should work with those tools, rather than be separate. Security and entitlement (how to protect data within a large cluster from unwanted snooping) was another topic that came up. I thought his Elephant Ears graph was interesting (I couldn’t actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked for a show of hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the one from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed real use cases, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: determine the strategy – design a platform and evangelize this within the organization; focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), and leverage a partner with experience; and work on execution of the strategy – implement the platform, Hadoop next to the other technologies, and work toward the DaaS platform. This kind of fit with some of the LinkedIn comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article
