Search Results

Search found 2001 results on 81 pages for 'matt thompson'.

Page 78/81

  • Roll your own free .NET technical conference

    - by Brian Schroer
    If you can’t get to a conference, let the conference come to you! There are a ton of free recorded conference presentations online…

    Microsoft TechEd
    Let’s start with the proverbial 800-pound gorilla. Recent TechEds have recorded the majority of presentations and made them available online the next day. Check out presentations from last month’s TechEd North America 2012 or last week’s TechEd Europe 2012. If you start at http://channel9.msdn.com/Events/TechEd, you can also drill down to presentations from prior years or from other regional TechEds (Australia, New Zealand, etc.). The top presentations from my “View Queue”:
      - Damian Edwards: Microsoft ASP.NET and the Realtime Web (SignalR)
      - Jennifer Smith: Design for Non-Designers
      - Scott Hunter: ASP.NET Roadmap: One ASP.NET – Web Forms, MVC, Web API, and more
      - Daniel Roth: Building HTTP Services with ASP.NET Web API
      - Benjamin Day: Scrum Under a Waterfall

    NDC
    The Norwegian Developer Conference site has the most interesting presentations, in my opinion. You can find the videos from the June 2012 conference at that link, and the 2011 and 2010 pages have a lot of presentations that are still relevant. My View Queue Top 5:
      - Shay Friedman: Roslyn... hmmmm... what?
      - Hadi Hariri: Just ‘cause it’s JavaScript, doesn’t give you a license to write rubbish
      - Paul Betts: Introduction to Rx
      - Greg Young: How to get productive in a project in 24 hours
      - Michael Feathers: Deep Design Lessons

    ØREDEV
    Travelling on from Norway to Sweden... I don’t know why, but the Scandinavians seem to have this conference thing figured out. ØREDEV happens each November, and you can find videos here and here. My View Queue Top 5:
      - Marc Gravell: Web Performance Triage
      - Robby Ingebretsen: Fonts, Form and Function: A Primer on Digital Typography
      - Jon Skeet: Async 101
      - Chris Patterson: Hacking Developer Productivity
      - Gary Short: .NET Collections Deep Dive

    aspConf - The Virtual ASP.NET Conference
    Formerly known as “mvcConf”, this one’s a little different: it’s a conference that takes place completely on the web. The next one’s happening July 17-18, and it’s not too late to register (it’s free!). Check out the recordings from February 2011 and July 2010. The July 2010 event is two years old and talks about ASP.NET MVC2, but most of it is still applicable, and Jimmy Bogard’s Put Your Controllers On a Diet presentation is the most useful technical talk I have ever seen.

    CodeStock
    Videos from the 2011 edition of this Tennessee conference are available. Presentations from last month’s 2012 conference should be available soon here. I’m looking forward to watching Matt Honeycutt’s Build Your Own Application Framework with ASP.NET MVC 3.

    UserGroup.tv
    UserGroup.tv was founded in January of 2011 by Shawn Weisfeld, with the mission of providing user group content online for free. You can search by date, group, speaker and category tags. My View Queue Top 5:
      - Sergey Rathon & Ian Henehan: UI Test Automation with Selenium
      - Rob Vettor: The Repository Pattern
      - Latish Seghal: The .NET Ninja’s Toolbelt
      - Amir Rajan: Get Things Done With Dynamic ASP.NET MVC
      - Jeffrey Richter: .NET Nuggets – Houston TechFest Keynote

    Read the article

  • Customer Support Spotlight: Clemson University

    - by cwarticki
    I've begun a Customer Support Spotlight series that highlights our wonderful customers and Oracle loyalists. A week ago I visited Clemson University. As I travel to visit and educate our customers, I provide many useful tips/tricks and support best practices (as found on my blog and twitter). Most of all, I always discover an Oracle gem who deserves recognition for their hard work and advocacy.

    Meet George Manley. George is a Storage Engineer who has worked in Clemson's Data Center all through college, partially in the Hardware Architecture group and partially in the Storage group. George and the rest of the Storage Team work with nearly all of the storage technologies in use at Clemson. This includes a wide array of different vendors' disk arrays, most of them Oracle/Sun 2540s. He also works with SAM/QFS, ACSLS, and our SL8500 Tape Libraries (all three Oracle/Sun products). (Pictured L to R: Matt Schoger (Oracle), Mark Flores (Oracle) and George Manley.)

    George was kind enough to take us for a data center tour. It was amazing. I rarely get to see the inside of data centers, and this one was massive. Clemson Computing and Information Technology's physical resources include the main data center located in the Information Technology Center at the Innovation Campus and Technology Park. The core of Clemson's computing infrastructure, the data center has 21,000 sq ft of raised floor and is powered by a 14MW substation. The ITC power capacity is 4.5MW. The data center is the home of both enterprise and HPC systems, and is staffed by CCIT staff on a 24-hour basis from a state-of-the-art network operations center within the ITC. A smaller business continuance data center is located on the main campus. The data center serves a wide variety of purposes, including HPC (supercomputing) resources which are shared with other universities throughout the state, the state's Medicaid processing system, and nearly all other needs for Clemson University. Yes, that's no typo: 14,256 cores and 37TB of memory!

    Thanks for the tour, George, and thank you very much for your time. The tour was fantastic. I enjoyed getting to know your team and I look forward to many successes from Clemson using Oracle products.

    -Chris Warticki, Global Customer Management

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments; then I came across this site, so my apologies to anyone who has seen this already.)

    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white.

    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this?

    It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data.

    Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:
      1. Is the rate of the indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR will get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
      2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially. Besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed.

    Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking. Noteworthy qualities we have in place:
      - Page download speed is pretty good (250-500 ms).
      - No errors (no 404 or 500 errors when getting spidered).
      - We use Google Webmaster Tools and log in daily.
      - Friendly URLs are in place.
      - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
      - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
      - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
      - We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • links for 2011-01-12

    - by Bob Rhubart
    WebCenter Spaces 11g PS2 Template Customization (Javier Ductor's Blog)
    "Recently, we have been involved in a WebCenter Spaces customization project. A customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look and feel as in the prototype..." - Javier Ductor (tags: oracle otn webcenter enteprise2.0)

    Matt Carter: Risky Business
    "Incorporating risk detection and mitigation capabilities into apps is becoming all the rage. There are plenty of real-life examples of cases where prevention of cyber-security threats and fraudsters might have kept governments and companies out of the news, and with more money in their accounts." (tags: oracle otn security middleware)

    John Brunswick: 5 Surprisingly Good Benefits of Corporate Blogs
    "Some may still propose that not all corporations are going to be able to provide the five benefits above and are more focused around shameless self promotion of products and services. If that is the case, that corporation is most likely not producing something of high value." - John Brunswick (tags: oracle otn enterprise2.0 blogging)

    InfoQ: IT And Architecture: Inside-Out Perspectives
    The software industry is in disarray, costs are escalating, and quality is diminishing. Promises of newer technologies and processes and methodologies in IT are still far from materializing on any significant scale. Bruce Laidlaw and Michael Poulin - each with more than 30 years of experience - compared notes on the past and present of IT and provide insights on what IT needs to make progress. (tags: ping.fm)

    SOA & Middleware: Canceling a running composite instance - example
    Useful tips from Niall Commiskey. (tags: soa middleware oracle)

    BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations (Oracle E-Business Suite Technology)
    "A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3." -- Steven Chan (tags: oracle bpel)

    Marc Kelderman: OSB: Deploy Service Level Agreement (SLA), aka Alert Rule
    "The big issue with these SLAs is the deployment. If you have dozens of services, with multiple operations, and you have a lot of environments it takes a while to create them... [But] I have a nice workaround." - Marc Kelderman (tags: oracle otn soa osb sla)

    @myfear: Java EE 7 - what's coming up for 2012? First hints.
    "Even if the actual Java EE 6 version is still not too widespread, we already have seen the first signs of the next EE 7 version written to the sky." -- Markus "myfear" Eisele (tags: oracle otn oracleace java)

    Read the article

  • Gamification in the enterprise updates, September edition

    - by erikanollwebb
    Things have been a little busy here at GamifyOracle. Last week, I attended a small conference in San Diego on Enterprise Gamification. Mario Herger of SAP, Matt Landes of Google and I were on a panel discussion about how to introduce and advocate gamification in your organization. I gave a talk as well as a workshop on gamification. The workshop was a new concept: to take our Design Jam from Applications User Experience and try it with people outside of user experience. I have to say, the whole thing was a great success, in great part because I had some expert help from Teena Singh from Apps UX. We took a flow from expense reporting and created a scenario about sales reps who are on the road a lot and how we needed them to get their expense reports filed by the end of the fiscal year. We divided the attendees into groups and gave them a little over two hours to work out how they might use game mechanics to gamify the flows. We even took the opportunity to re-use the app our fab dev team in our Mexico Development Center put together to gamify the event, including badges, points, prizes and a leaderboard. Since I am a firm believer that you can't gamify everything (or at least, not everything well), I focused my talk prior to the workshop on when it works and when it might not, including the pitfalls of gamifying badly. I was impressed that the teams all considered what might go wrong with gamifying expenses and built into their designs some protections against that. I can't wait to take this concept on the road again; it really was a fun day.

    Now that we have gotten through that set of events, we're wildly working on our next project for next week. I'm doing a focus group at Oracle OpenWorld on Gamification in the Enterprise. To do that, Andrea Cantu and I are trying to kill as many trees as possible while we work out some gamification concepts to present. It should be a great event and I'm hoping we learn a lot about what our customers think about the use of gamification in their companies and in the products they use.

    So that's the news so far from GamifyOracle land. I'll try to get more out about those events after next week. And if you will be at OOW, ping me and we can discuss in person! I'd love to know what everyone is thinking in the area.

    Read the article

  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
    Noemi Moreno, Red Gate Business Intelligence Specialist

    Software vendors – even Microsoft – tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012. It took until this release before we got something as simple as an “undo” function. Before I joined Red Gate as a BI specialist, I worked on SQL Development. I’ll never forget the time I discovered Red Gate’s SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn’t find a tool that made cube releases as easy as they are with SQL Compare. This became my quest.

    I pitched the idea to a few people in Red Gate’s regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to persuade a roomful of cynical developers, hardened to the blandishments of project managers, to help develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes. I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn’t enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance for the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week.

    I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn’t know that Irma and I would be away at a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back – basic, and with a few bugs, but a working tool nonetheless! Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute.

    Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that? So download the free beta and give us your feedback. And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

    Read the article

  • Oracle Social Network Developer Challenge: HarQen Nodal

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. We wrapped the Oracle Social Network Developer Challenge last week at OpenWorld, and this week, I’ll be sharing all the entries. All the teams that entered our challenge did a ton of work and built really interesting integrations with Oracle Social Network, and I want to showcase their hard work and innovative ideas. Today, I give you Nodal from the HarQen (@harqen) team: Kris Gösser (@krisgosser), Jesse Vogt (@jesse_vogt) and Matt Stockton (@mstockton). The guys from HarQen built Nodal to provide a visual way to navigate your connections and conversations in Oracle Social Network and view relationships. Using Nodal, you can:
      - Search through names and profiles in Oracle Social Network.
      - Choose people and view their social graphs in a visually useful way.
      - Expand nodes in the social graph and add that person's social graph to the Nodal view for comparison.
      - Move nodes around and lock them in place for easier viewing, using a physics engine for movement.
      - Adjust the physics engine properties according to your viewing preferences.
      - Select nodes in the social graph and create a conversation directly based on the selection.

    Here are some shots of Nodal. They really don’t do the physics engine justice, but maybe the guys at HarQen will post a video of what they did for your viewing pleasure. Nodal’s visuals wowed the judges and the audience, and anyone with a decent-sized social network presence understands the need for good network visualization. Tools like Nodal allow you to discover hidden connections in your network, maximize the value of your weak ties, and find mavens - a very important key to getting work done. Thanks to the HarQen team for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week.

    Read the article

  • Convert Mac font to Windows format?

    - by Moshe
    I have a font that was used on Mac OS X 10.5. Mac recognizes it as a font file. Windows Vista shows it as a "File". Changing the file extension doesn't work. I tried both .otf and .ttf and neither worked. (Surprise! I didn't think so, but I'd be a fool for asking if that was the answer, so I tried...) Perhaps I need to try a different extension? Are there any utilities to convert the font? A Windows equivalent? It's called Franco. (FrancoNormal, actually.) Thanks.

    EDIT: DFontSplitter didn't work. I saw something online about a "data fork" and "resource fork" that have to match up. Can someone please explain that? Do I need both for a font conversion? What do I tell my graphic designer to send me?

    EDIT 2: The font doesn't work when downloaded to Mac OS X either. (A different machine from the original.) "Get Info" reveals that there is no file extension, leading me to believe that this is the newer font format. Where would I find the "data fork" and "resource fork" on the Mac? I want to be able to tell the designer exactly where to look.

    EDIT 3: DFontSplitter works on the original computer but not on my PC. I converted and emailed myself the fonts from the graphic designer's laptop. I guess it has to do with the data being stored in a "fork" whatnot. Thanks, Matt, for that article. Thank you again for your help!

    Read the article

  • Scanning website for vulnerabilities

    - by Kristen
    I have found that the local school's website installed a Perl calendar - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it's full of Viagra links and the like... The program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out-of-the-box there have been some exploits of admin tools / login in old versions. For all I know they also have PHPBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking if they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit rather than open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows DEV, so this *nix stuff is all Greek to me. I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I have a worry about how to find out if they are well known / honest - otherwise I will be tipping them a wink with a domain name that may be at risk! Many thanks.

    Read the article

  • Problems with "Read Only" on a Samba share from Windows machines

    - by fistameeny
    Hi,

    We have a Ubuntu 10.04 Server that has a bunch of Samba shares on it that Windows workstations connect to. Each Windows workstation has a valid username/password to access the shares, which have restricted access governed by Samba. The problem we are experiencing is that Samba doesn't seem to be able to mimic the Windows way of handling "Read Only" attributes. Say I have two users, UserA and UserB, both in a group called Staff. UserA creates a file that is readable/writeable by the group (i.e. chmod rwxrwx---). If UserA then sets the "Read Only" flag, this changes the permissions to r-xr-x--- (i.e. no write for anyone). As UserB is in the same group as UserA, they should be able to remove the "Read Only" permission - however, they can't, as Samba won't allow it. Is there a way to force Samba to allow users within the same group to remove the "Read Only" from a file not created by them?

    Edit: The share is defined in smb.conf as follows:

        [global]
        log file = /var/log/samba/log.%m
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        obey pam restrictions = yes
        map to guest = bad user
        encrypt passwords = true
        passwd program = /usr/bin/passwd %u
        passdb backend = tdbsam
        dns proxy = no
        netbios name = ubsrv
        server string = ubsrv
        unix password sync = yes
        os level = 20
        syslog = 0
        usershare allow guests = yes
        panic action = /usr/share/samba/panic-action %d
        max log size = 1000
        pam password change = yes
        workgroup = workgroup

        [Projects]
        valid users = @Staff
        writeable = yes
        user = @Staff
        create mode = 0777
        path = /srv/samba/Projects
        directory mode = 0777
        store dos attributes = Yes

    The folder itself looks like this:

        ls -l /srv/samba/
        drwxrwxrwx 2 nobody Staff 4096 2010-11-04 10:09 Projects

    Thanks in advance,
    Matt

    Read the article

  • Can't get intel atom g-500 video driver to work with ubuntu 10.10 netbook edition.

    - by Matthew
    First of all, I am completely new to Linux, so if you respond, please do so in a 'linux for dummies' tone so that my brain will be able to process it. I recently installed ubuntu on my dell mini-inspiron 1010. It has one GB of ram and an intel atom processor that uses the intel 500 graphic accelerator driver for windows, and it can run 1024x768 comfortably in xp. When I was installing ubuntu I had quite a bit of trouble with my display, and I am still unable to adjust my settings from 800x600x0x0; there is no hardware acceleration. I visited the intel site and installed the linux drivers with the help of a friend, but still no change. I tried adding resolution settings through xconf, but they could not be applied even after I added the values. I am probably going about this totally wrong, but I've spent quite a lot of time browsing through forums and still haven't found a solution. Any help would be greatly appreciated. Also, any other beginner tips that you have would be much appreciated. Thanks in advance, Matt

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract and the client would like for me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app and the future ones they wish to install are all MVC-based, which intercepts and re-writes URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:
      - Creating a new site on the server in its own app pool
      - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
      - Setting the binding for that site to MYAPP
      - Setting the binding for that site to MYAPP.SERVERNAME
      - Setting the binding for that site to MYAPP.SERVERNAME.COM
      - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt
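
    For what it's worth, a hedged sketch of the host-header binding setup done programmatically with the Microsoft.Web.Administration API that ships with IIS 7 (the site name, pool name and physical path below are hypothetical, and a DNS record resolving MYAPP.SERVERNAME.COMPANYNAME.COM to the server is assumed to exist - in setups like this, missing DNS is often why none of the bindings appear to work):

        using Microsoft.Web.Administration; // %windir%\System32\inetsrv\Microsoft.Web.Administration.dll

        class AddSubdomainSite
        {
            static void Main()
            {
                using (ServerManager manager = new ServerManager())
                {
                    // Bind the new site to a host header (IP:port:hostHeader) so it
                    // can coexist with the default website on *:80.
                    Site site = manager.Sites.Add(
                        "MyApp",                                  // hypothetical site name
                        "http",                                   // binding protocol
                        "*:80:myapp.servername.companyname.com",  // host-header binding
                        @"C:\inetpub\myapp");                     // hypothetical physical path

                    site.Applications["/"].ApplicationPoolName = "MyAppPool"; // assumed existing pool
                    manager.CommitChanges();
                }
            }
        }

    The same binding can be created in IIS Manager by filling in the Host name field under Site Bindings; either way, requests only arrive once the subdomain actually resolves to the server.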

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white.

    This last time (just this morning), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this?

    EDIT: It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • iptables blank after reboot

    - by theillien
    We've started encountering an issue with iptables on our RHEL 6.3 systems in that after a reboot, when the service starts, the rules are not loaded. We get the empty ruleset:

        [msnyder@matt-test ~]$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    This is in spite of the fact that we have rules defined and the service is, indeed, running. That I know because when I run service iptables start it simply drops back to the prompt. If I run service iptables restart it actually stops and then restarts the service. And, of course, if I run service iptables stop it indicates that iptables is actually stopping. Knowing that I need to restart the service, I do so and the rules load up properly. They simply don't get loaded after a reboot. Unless they get loaded differently during a reboot, I don't see how our rules would be wrong. If they were, they wouldn't even load during a service restart. Has anyone else ever encountered this?

    EDIT: The rules are already saved in /etc/sysconfig/iptables. They are not added on the fly from the command line, so service iptables save is unnecessary.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C.

    The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems.

    The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

    The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage - a similar type of storage to Berkeley DB's, except that it is based on a hash table, which must reside entirely in-memory, rather than a btree, which can live in-memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

    Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle.

    Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
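
    To make the "library, not server" point concrete, here is a minimal sketch of node-local key/value storage with Berkeley DB, assuming the BerkeleyDB .NET bindings (the libdb_dotnet wrapper around the C library); the class and enum names are recalled from those bindings rather than verified, so treat the exact API as an assumption:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using BerkeleyDB; // assumed: libdb_dotnet assembly wrapping the C library

        class LocalKeyValueSketch
        {
            static void Main()
            {
                var config = new BTreeDatabaseConfig();
                config.Creation = CreatePolicy.IF_NEEDED; // create the file on first use

                // The database is an ordinary local file - no server process involved.
                BTreeDatabase db = BTreeDatabase.Open("accounts.db", config);

                var key = new DatabaseEntry(Encoding.UTF8.GetBytes("user:1979"));
                var data = new DatabaseEntry(Encoding.UTF8.GetBytes("ken"));
                db.Put(key, data);

                // Get returns the stored key/data pair for the matching key.
                KeyValuePair<DatabaseEntry, DatabaseEntry> pair = db.Get(key);
                Console.WriteLine(Encoding.UTF8.GetString(pair.Value.Data));

                db.Close();
            }
        }

    Everything a NoSQL system layers on top - partitioning, replication, the network API - sits above this kind of local store.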

    Read the article

  • Bind to a method in WPF?

    - by Cameron MacFarland
    How do you bind to an object's method in this scenario in WPF?

        public class RootObject
        {
            public string Name { get; }

            public ObservableCollection<ChildObject> GetChildren() { ... }
        }

        public class ChildObject
        {
            public string Name { get; }
        }

    XAML:

        <TreeView ItemsSource="some list of RootObjects">
          <TreeView.Resources>
            <HierarchicalDataTemplate DataType="{x:Type data:RootObject}" ItemsSource="???">
              <TextBlock Text="{Binding Path=Name}" />
            </HierarchicalDataTemplate>
            <HierarchicalDataTemplate DataType="{x:Type data:ChildObject}">
              <TextBlock Text="{Binding Path=Name}" />
            </HierarchicalDataTemplate>
          </TreeView.Resources>
        </TreeView>

    Here I want to bind to the GetChildren method on each RootObject of the tree.

    EDIT: Binding to an ObjectDataProvider doesn't seem to work because I'm binding to a list of items, and the ObjectDataProvider needs either a static method, or it creates its own instance and uses that. For example, using Matt's answer I get:

        System.Windows.Data Error: 33 : ObjectDataProvider cannot create object; Type='RootObject'; Error='Wrong parameters for constructor.'
        System.Windows.Data Error: 34 : ObjectDataProvider: Failure trying to invoke method on type; Method='GetChildren'; Type='RootObject'; Error='The specified member cannot be invoked on target.' TargetException:'System.Reflection.TargetException: Non-static method requires a target.
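
    One hedged workaround, not from the original question: WPF bindings resolve properties rather than methods, so the usual trick is to wrap GetChildren() in a read-only property and point ItemsSource at that. The Children property name below is hypothetical; a minimal sketch:

        using System.Collections.ObjectModel;

        public class RootObject
        {
            public string Name { get; private set; }

            // Hypothetical wrapper property: the binding engine can read this,
            // while it cannot invoke GetChildren() directly.
            public ObservableCollection<ChildObject> Children
            {
                get { return GetChildren(); }
            }

            public ObservableCollection<ChildObject> GetChildren()
            {
                // ... build or return the collection as before ...
                return new ObservableCollection<ChildObject>();
            }
        }

    The template then binds with ItemsSource="{Binding Children}". Because the property getter runs on each bound RootObject instance, the static-method limitation of ObjectDataProvider described in the edit above never comes into play.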

    Read the article

  • Subdomain on different host

    - by mattsmith321
    Hi everyone! I'm trying to host a subdomain for my site with a different hosting company and I'm running into issues on how to set it up. Here are the specifics:
      - Domain is registered with GoDaddy.
      - Nameservers are pointing to DiscountASP.net, where an ASP.NET app has been happily running for a couple of years.
      - Would like blog.mydomain.com to point to my account with DreamHost.com to take advantage of their LAMP stack.

    I have added blog.mydomain.com to DreamHost (after adding mydomain.com) via their control panel. I thought I would be able to add a subdomain entry on GoDaddy to point to DreamHost, but all they allow is blog.mydomain.com = new url. In theory I could just take our .biz or .net domain and host it on DreamHost, but I was hoping I could do it all with a subdomain. So, to summarize, I'd like to know if what I want to do is feasible and if so, how do I go about it (given the constraints of GoDaddy, DiscountASP, & DreamHost). Thanks, Matt

    Read the article

  • PyGTK, Glade, Changing the window view and threads

    - by Gaunt Face
    Heya everyone,

    Forgive me if this seems like a stupid question; so far nowhere on the internet can I find someone offering a solution to this, and I just wanted to get some feedback from someone with more experience than myself (I've only been using python, pyGTK and Glade for 2 days now). I have a UI window displaying and it updates with messages from a thread that is handling a bluetooth connection. This is fine, and I have the application closing and running quite reliably. The problem is, after a bluetooth connection is made I wish to maintain the bluetooth thread (i.e. keep the connection going) but completely change the UI of the main window. Now the impression I am getting from pyGTK applications made from Glade is that the easiest thing to do is just open a new window. Is this really the best option? Can I cut the tree of widgets off at the root, maintaining the window widget but adding on a new set of widgets from a separate glade file? If opening a new window is the best option, am I right in assuming that the bluetooth thread can be kept alive during this transition, providing I update any callbacks? Any help or pointers would be great.

    Cheers,
    Matt

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi,

    I have read quite a bit about the need to reintegrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people are regularly updating the branch from the trunk, which means that the final merge back is reflective. In my use-case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability we don't want to merge up from the trunk, but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to:
      1. Create the branch
      2. Make regular changes to the branch (and trunk)
      3. Merge back to trunk regularly (daily perhaps)

    Since we will never merge up from trunk, I don't think we need to worry about the problems that reintegrating is designed to fix. Can anyone see a problem with this approach?

    Cheers,
    Matt

    Read the article

  • GAE modeling relationship options

    - by Sway
    Hi there,

    I need to model the following situation and I can't seem to find a consistent example on how to do it "correctly" for the Google App Engine. Suppose I've got a simple situation like the following:

        [Company] 1 ----- M [Store]

    A company has one to many stores. Each store has an address made up of an address line 1, city, state, country, postcode etc. OK. Let's say we need to create, say, an "Audit". An Audit is for a company and can be across one to many stores. So something like:

        [Audit] 1 ------ 1 [Company] 1 ------ M [Store]

    Now we need to query all of the "audits" based on the store "addresses" in order to send the "auditors" to the right locations. There seem to be numerous articles like this one: http://code.google.com/appengine/articles/modeling.html which give examples of creating a "ContactCompany" model class. However they also say that you should use this kind of relationship only when you "really need to" and with "care" for performance. I've also read - frequently - that you should denormalize as much as possible, thereby moving all of the "query-able" data into the Audit class. So what would you suggest as the best way to solve this? I've seen that there is an Expando class but I'm not sure if that is the "best" option for this. Any help or thoughts on this would be totally appreciated.

    Thanks in advance,
    Matt

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id = numeric(0))
        json_data <- fromJSON(json_file)
        n_people <- length(json_data)

        for (person in 1:n_people) {
            # flatten this person's list of attributes into a one-row data frame
            person_dataframe <- as.data.frame(t(unlist(json_data[[person]])))
            people_data <- merge(people_data, person_dataframe, all = TRUE)
        }

        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file = output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]
        person_id name gender hair_color

        [[2]]
        person_id name location gender height

        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
            hair_color = "brown"),
            .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
            location = structure(c(47231, "IN"), .Names = c("zip_code", "state")),
            gender = "M", height = 172),
            .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file.

    Thanks!
    Bryan

    Read the article

  • ASP.NET MVC Map String Url To A Route Value Object

    - by mwgriffiths
    I am creating a modular ASP.NET MVC application using areas. In short, I have created a greedy route that captures all routes beginning with {application}/{*catchAll}. Here is the action:

        // GET /application/index
        public ActionResult Index(string application, object catchAll)
        {
            // forward to partial request to return partial view
            ViewData["partialRequest"] = new PartialRequest(catchAll);
            // this gets called in the view page and uses a partial request class to return a partial view
        }

    Example: the URL "/Application/Accounts/LogOn" will then cause the Index action to pass "/Accounts/LogOn" into the PartialRequest, but as a string value.

        // partial request constructor
        public PartialRequest(object routeValues)
        {
            RouteValueDictionary = new RouteValueDictionary(routeValues);
        }

    In this case, the RouteValueDictionary will not contain any values for the routeData, whereas if I specify a route in the Index action:

        ViewData["partialRequest"] = new PartialRequest(new { controller = "accounts", action = "logon" });

    it works, and the routeData contains a "controller" key and an "action" key; whereas before, the keys are empty, and therefore the rest of the class won't work. So my question is, how can I convert the "/Accounts/LogOn" in the catchAll to new { controller = "accounts", action = "logon" }?? If this is not clear, I will explain more! :) Matt

    This is the "closest" I have got, but it obviously won't work for complex routes:

        // split values into array
        var routeParts = catchAll.ToString().Split(new char[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

        // feels like a hack
        catchAll = new { controller = routeParts[0], action = routeParts[1] };
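
    A hedged sketch of one way to turn the catchAll string back into route values without positional splitting: feed the path through the registered routing table via RouteCollection.GetRouteData, using a minimal HttpContextBase wrapper. The PathOnlyHttpContext helper below is hypothetical, not part of ASP.NET MVC:

        using System.Web;
        using System.Web.Routing;

        // Hypothetical helper: presents an arbitrary app-relative path to the
        // routing engine as though it were the current request.
        public class PathOnlyHttpContext : HttpContextBase
        {
            private readonly HttpRequestBase _request;

            public PathOnlyHttpContext(string path)
            {
                _request = new PathOnlyRequest(path);
            }

            public override HttpRequestBase Request
            {
                get { return _request; }
            }

            private class PathOnlyRequest : HttpRequestBase
            {
                private readonly string _path;

                public PathOnlyRequest(string path) { _path = path; }

                public override string AppRelativeCurrentExecutionFilePath
                {
                    get { return "~" + _path; } // e.g. "~/Accounts/LogOn"
                }

                public override string PathInfo
                {
                    get { return string.Empty; }
                }
            }
        }

        // Usage inside the Index action (sketch):
        // RouteData rd = RouteTable.Routes.GetRouteData(new PathOnlyHttpContext(catchAll.ToString()));
        // if (rd != null)
        // {
        //     object controller = rd.Values["controller"];
        //     object action = rd.Values["action"];
        // }

    Because this reuses the routes already registered, complex patterns (extra segments, defaults, constraints) come back as proper route values instead of relying on the two-part string split above.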

    Read the article

  • Merge Multiple Worksheets From Multiple Workbooks

    - by Droter
    Hi,

    I have found multiple posts on merging data but I am still running into some problems. I have multiple files with multiple sheets. Example: 2007-01.xls ... 2007-12.xls; in each of these files are daily data on sheets labeled 01, 02, 03... There are other sheets in the files, so I can't just loop through all worksheets. I need to combine the daily data into monthly data, then all of the monthly data points into yearly. On the monthly data I need it to be added to the bottom of the page. I have added the file open changes for Excel 2007. Here is what I have so far:

        Sub RunCodeOnAllXLSFiles()
            Dim lCount As Long
            Dim wbMaster As Workbook
            Dim oWbk As Workbook
            Dim sFil As String
            Dim sPath As String

            Application.ScreenUpdating = False
            Application.DisplayAlerts = False
            Application.EnableEvents = False
            On Error Resume Next

            Set wbMaster = ThisWorkbook

            sPath = "C:\Users\test\" 'location of files
            ChDir sPath
            sFil = Dir("*.xls") 'change or add formats

            Do While sFil <> "" 'loop until all files in folder sPath have been processed
                Set oWbk = Workbooks.Open(sPath & "\" & sFil) 'opens the file
                Sheets("01").Select ' HARD CODED FIRST DAY
                Range("B6:F101").Copy 'AREA I NEED TO COPY
                wbMaster.Activate
                wbMaster.ActiveSheet.Range("B65536").End(xlUp)(2).PasteSpecial Paste:=xlValues
                Application.CutCopyMode = False
                oWbk.Close True 'close the workbook, saving changes
                sFil = Dir
            Loop ' End of LOOP

            On Error GoTo 0
            Application.ScreenUpdating = True
            Application.DisplayAlerts = True
            Application.EnableEvents = True
        End Sub

    Right now it can find the files, open them up and get to the right worksheet, but when it tries to copy the data nothing is copied over.

    Thanks for your help,
    Matt

    Read the article
