Search Results

Search found 1163 results on 47 pages for 'jeff o'.

Page 4/47 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Why does the ASP.Net Web Forms model "suck"?

    - by Daniel Magliola
    I've heard Jeff Atwood, Joel Spolsky, and many other legendary people talk about how the ASP.Net Web Forms model sucks. (So this question is kind of directed to them, hopefully Jeff is reading) Now, I highly respect their opinion, given their background and expertise, but truth be told, I absolutely LOVE ASP.Net. I think the model is brilliant, and it sucks if you have no idea what you're doing, but once you understand how to control ViewState, when to use handlers instead of pages, etc., it is generations ahead of all the other models. So every time I hear someone complain about how it sucks, I can't help asking the same question... Why? What is it that's so bad about it? I appreciate all opinions. I'm assuming there's probably a post on Jeff's blog talking about this too...

    Read the article

  • JNI unsigned char to byte array

    - by Jeff Storey
    I'm working with a C++ library that stores image byte data in an array of unsigned characters. My JNI function returns a jbyteArray (which then gets converted to a BufferedImage on the Java side), but I'm not sure how to fill the jbyteArray from the unsigned char array (if it is possible). Can anyone provide a snippet for this last part to basically do this: // size is the size of the unsigned char array const int size = 100; unsigned char* buf = new unsigned char[size]; // buf gets passed to another library here to be populated jbyteArray bArray = env->NewByteArray(size); // now how do I get the data from buf to bArray? Thanks, Jeff
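
    A sketch, not from the original thread: on the native side, the usual call for copying a C buffer into a Java byte array is env->SetByteArrayRegion(bArray, 0, size, reinterpret_cast<const jbyte*>(buf)). On the Java side, assuming the bytes are raw interleaved BGR pixel data of known width and height (an assumption; the question does not say what the buffer holds), the conversion to a BufferedImage might look like this (FrameConverter and toImage are hypothetical names):

      import java.awt.image.BufferedImage;

      final class FrameConverter {
          // Sketch only: assumes the JNI byte[] holds interleaved B,G,R bytes, 3 per pixel.
          static BufferedImage toImage(byte[] bgrBytes, int width, int height) {
              BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
              // setDataElements copies the interleaved bytes straight into the raster
              img.getRaster().setDataElements(0, 0, width, height, bgrBytes);
              return img;
          }
      }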

    Read the article

  • ItemFileWriteStore: how to change the data?

    - by jeff porter
    Hi, I'd like to change the data in my dojo.data.ItemFileWriteStore. Currently I have... var rawdataCacheItems = [{'cachereq':'14:52:03','name':'Test1','cacheID':'3','ver':'7'}]; var cacheInfo = new dojo.data.ItemFileWriteStore({ data: { identifier: 'cacheID', label: 'cacheID', items: rawdataCacheItems } }); I'd like to make an XHR request to get a new JSON string of the data to render. What I can't work out is how to change the data in the "items" held by ItemFileWriteStore. Can anyone point me in the correct direction? Thanks Jeff Porter

    Read the article

  • groovy call private method in Java super class

    - by Jeff Storey
    I have an abstract Java class MyAbstractClass with a private method. There is a concrete implementation MyConcreteClass. public abstract class MyAbstractClass { private void somePrivateMethod() { /* ... */ } } public class MyConcreteClass extends MyAbstractClass { // implementation details } In my Groovy test class I have class MyAbstractClassTest { void myTestMethod() { MyAbstractClass mac = new MyConcreteClass() mac.somePrivateMethod() } } I get an error that there is no such method signature for somePrivateMethod. I know Groovy can call private methods, but I'm guessing the problem is that the private method is in the superclass, not MyConcreteClass. Is there a way to invoke a private method in the superclass like this (other than using something like PrivateAccessor)? thanks Jeff
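
    A sketch of one common workaround, not necessarily what the accepted answer suggested: use plain reflection (shown here in Java, which a Groovy test can call verbatim) and look the method up on the declaring superclass rather than the concrete class. MyAbstractClass, MyConcreteClass and somePrivateMethod come from the question; callSomePrivateMethod is a hypothetical wrapper:

      import java.lang.reflect.Method;

      // Sketch: resolve the private method on the class that declares it (MyAbstractClass),
      // disable the access check, then invoke it on the concrete instance.
      static void callSomePrivateMethod() throws Exception {
          MyAbstractClass mac = new MyConcreteClass();
          Method m = MyAbstractClass.class.getDeclaredMethod("somePrivateMethod");
          m.setAccessible(true);
          m.invoke(mac);
      }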

    Read the article

  • Eclipse Java Profiler

    - by Jeff Storey
    I know this has been asked before, but I have not found anything recent that really gives a good answer. I'm trying to find a free profiler for eclipse that works well. I would like a graphical breakdown of execution time in particular. I've tried TPTP but have had no luck at all with GUI apps (it took almost a minute for a GUI app to start and was virtually unusable on screen - it uses a lot of Java OpenGL, so I'm not sure if it has to do with that). I liked YourKit, but unfortunately it's not free. I even tried switching to NetBeans since they have a built in profiler. If anyone has had success with particular profilers (even if it was TPTP), I'd like to hear about it. Any recommendations would be greatly appreciated. thanks, Jeff

    Read the article

  • Application Server or Lightweight Container?

    - by Jeff Storey
    Let me preface this by saying this is not an actual situation of mine but I'm asking this question more for my own knowledge and to get other people's inputs here. I've used both Spring and EJB3/JBoss, and for the smaller types of applications I've built, Spring (+Tomcat when needed) has been much simpler to use. However, when scaling up to larger applications that require things like load balancing and clustering, is Spring still a viable solution? Or is it time to turn to a solution like EJB3/JBoss when you start to get big enough to need that? I'm not sure if I've scoped the problem well enough to get a good answer, so please let me know. Thanks, Jeff

    Read the article

  • NHibernate, DynamicProxy, and Spring AOP

    - by jeff
    We have a Spring IoC-managed application that uses NHibernate in its persistence layer. We have used Spring AOP and understand its terminology and capabilities, and we have some investment in Spring proxies. Now we want to add a PropertyChangedMixin and a ValidatorInterceptor (not NHibernate Validator, but based on Spring validation) onto our NHibernate-managed objects. I've looked at the hooks for the NHibernate IInterceptor and EventListeners, and that gives me a place to apply the desired proxies. If I use the Spring proxies, are they going to play nicely with the existing NHibernate proxies? We don't lazy load. From the simpler NHibernate examples, the benefits of DynamicProxy look appealing. I can go either way, but I'd like to hear suggestions. Thanks, jeff

    Read the article

  • JavaEE Application Server or Lightweight Container?

    - by Jeff Storey
    Let me preface this by saying this is not an actual situation of mine but I'm asking this question more for my own knowledge and to get other people's inputs here. I've used both Spring and EJB3/JBoss, and for the smaller types of applications I've built, Spring (+Tomcat when needed) has been much simpler to use. However, when scaling up to larger applications that require things like load balancing and clustering, is Spring still a viable solution? Or is it time to turn to a solution like EJB3/JBoss when you start to get big enough to need that? I'm not sure if I've scoped the problem well enough to get a good answer, so please let me know. Thanks, Jeff

    Read the article

  • Perl Regex Output only characters that can be used as unix filename

    - by Jeff Balinsky
    I wrote a basic MP3-organizing script for myself. I know the power of regex, but I suck with the syntax. I have the line $outname = "/home/jebsky/safehouse/music/mp3/" . $inital . "/" . $artist . "/" . $year ." - ". $album . "/" . $track ." - ". $artist ." - ". $title . ".mp3"; All I want is a regex that changes $outname so that any characters that are not safe for filenames get replaced by an underscore. Thanks, Jeff

    Read the article

  • How to customize OOTB workflow emails

    - by Jeff
    How can I make simple format-type customizations to ALL OOTB workflow related emails? I have found that many pre-Sharepoint 2010 posts indicated that OOTB workflow emails are in fact 'alerts', and therefore OOTB workflow emails could be customized using the same technique which is: making a customized version of alerttemplates.xml and even using IAlertNotifyHandler to intercept all alert emails. However, it seems that OOTB workflow and workflow task emails are not affected by changes to my customalerttemplates.xml file (which I do follow with stsadm updatealerttemplates, iisreset, and timer service restart). This is what I used as a guide to customize alerts: http://blogs.msdn.com/b/sharepointdeveloperdocs/archive/2007/12/14/how-to-customizing-alert-emails-using-ialertnotificationhandler.aspx What am I missing? Is there a separate template for workflow emails? Can OOTB workflow emails be customized? Thanks! Jeff

    Read the article

  • Grails/Spring HttpServletRequest synchronization

    - by Jeff Storey
    I was writing a simple Grails app and I have a spot in a GSP where one of my Java beans is modified. <g:each in="${myList}" status="i" var="myVar"> // if the user performs some view action, update one of the myVar elements </g:each> This works, but I don't think it's quite threadsafe. myList is an HTTP request variable, but on pages that use AJAX (or other client-side manipulations) it is possible for two threads to be modifying the same request-scope variable. The Spring AbstractController class provides a setSynchronizeOnSession method. Does Grails provide any equivalent functionality? If not, what's the best way to protect this non-threadsafe mutation? thanks, Jeff
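
    Not a Grails-specific answer, but a sketch of the Spring utility that underlies setSynchronizeOnSession: synchronize the mutating block on the session mutex so concurrent AJAX requests from the same session are serialized. SessionSafeUpdate and updateSafely are hypothetical names, and whether this is the idiomatic Grails approach is not confirmed here:

      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpSession;
      import org.springframework.web.util.WebUtils;

      final class SessionSafeUpdate {
          // Sketch: serialize concurrent requests from the same user's session around the mutation.
          static void updateSafely(HttpServletRequest request, Runnable mutation) {
              HttpSession session = request.getSession();
              Object mutex = WebUtils.getSessionMutex(session); // falls back to the session object itself
              synchronized (mutex) {
                  mutation.run(); // e.g. modify the myList element here
              }
          }
      }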

    Read the article

  • How do I chain forms in Access? (pass values between them)

    - by jeff porter
    Hello, I'm using Access 2007 and have a data model like this... Passenger - Bookings - Destinations So 1 Passenger can have many Bookings, each for 1 Destination. My problem... I can create a form to allow the entry of Passenger details, but I then want to add a NEXT button to take me to a form to enter the details of the Booking (i.e. just a simple drop-down list of the Destinations). I've added the NEXT button and it has the events RunCommand SaveRecord, OpenForm Destination_form. BUT, I can't work out how to pass across to the new form the primary key of the passenger that was just entered (PassengerID). I'd really like to have just one form that allows the entry of the Passenger details and the selection of a Destination, and that then creates the entries in the 2 tables (Passenger & Bookings), but I can't get that to work either. Can anyone help me out please? Thanks Jeff Porter

    Read the article

  • Replacing Resource files on new BlackBerry app version

    - by Jeff
    Hello, I am maintaining an existing BlackBerry application (implemented as a MIDlet). The application contains a number of data files that get bundled with the app as resources. Some of these data files need to be updated for a new version of the app. When the user goes to install a new version of the application (via URL of Jad file), it prompts them with the following message "Persistent data exists for the application. Would you like to retain this data? " If the user selects "Yes", it looks like the app continues to use the old resource files. This is so surprising to me. First of all, am I losing my mind or will an upgrade really not overwrite existing resource files? Is there any way I can force it to? Thanks, Jeff

    Read the article

  • Delphi VirtualStringTree - Check for Duplicates?

    - by Jeff
    Hello S.O! Yeah, I know I post a lot of questions, but that's because I either need assurance that I am doing it right, want to know what I am doing wrong, or am totally clueless and can't find anything in the documentation. Anyway, I am trying to check for duplicate nodes. Here is how I would want to do it: loop through my nodes and compare each node's text (record). But if I have many nodes, wouldn't that be too time- and memory-consuming? Would there be a better approach for this? Thanks! - Jeff.
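
    A sketch of the general idea rather than a VirtualStringTree-specific answer: instead of comparing every node against every other node, make one pass and remember the keys already seen in a set, so n nodes cost roughly n constant-time lookups. The code below is plain Java for illustration only (the question is about Delphi); DuplicateCheck, findDuplicates and nodeTexts are hypothetical names standing in for however the node records' text is read:

      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      final class DuplicateCheck {
          // Sketch: one pass over the node texts, flagging any value already seen.
          static Set<String> findDuplicates(List<String> nodeTexts) {
              Set<String> seen = new HashSet<>();
              Set<String> duplicates = new HashSet<>();
              for (String text : nodeTexts) {
                  if (!seen.add(text)) { // add() returns false when the value is already present
                      duplicates.add(text);
                  }
              }
              return duplicates;
          }
      }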

    Read the article

  • Create a buffered image from rgb pixel values

    - by Jeff Storey
    I have an integer array of RGB pixels that looks something like: pixels[0] = <rgb-value of pixel(0,0)> pixels[1] = <rgb-value of pixel(1,0)> pixels[2] = <rgb-value of pixel(2,0)> pixels[3] = <rgb-value of pixel(0,1)> ...etc... And I'm trying to create a BufferedImage from it. I tried the following: BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB); img.getRaster().setPixels(0, 0, width, height, pixels); But the resulting image has problems with the color bands. The image is unclear and there are diagonal and horizontal lines through it. What is the proper way to initialize the image with the rgb values? thanks, Jeff
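
    A hedged explanation (the accepted answer is not shown here): Raster.setPixels expects one array element per color sample rather than packed 0xRRGGBB ints, which would scramble the bands much as described. For packed RGB ints, BufferedImage.setRGB accepts the array directly; a minimal sketch (PackedRgbImage and fromPackedRgb are hypothetical names):

      import java.awt.image.BufferedImage;

      final class PackedRgbImage {
          // Sketch: pixels[] holds packed 0xRRGGBB ints in row-major order, as in the question.
          static BufferedImage fromPackedRgb(int[] pixels, int width, int height) {
              BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
              img.setRGB(0, 0, width, height, pixels, 0, width); // offset 0, scansize = width
              return img;
          }
      }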

    Read the article

  • creating a WordPress dev environment and uploading to production

    - by Jeff
    I am an old-school Java developer who is considering using WordPress. I'm used to developing locally on my PC (yeah, yeah, not even a Mac) and then FTPing my files up to a production environment on a remote server. My high-level review of WordPress gives me the impression that typically there is no concept of lower environments and that all updates occur directly in production. Is this the case? If not, can someone explain how one goes about uploading the files to a website? Thanks, Jeff

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process & merge into their existing database. During business hours, load on the servers is pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back to back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETL such that they don't overlap, and move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in groups of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs those machines were able to process independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

    So in summary, are we missing an obvious solution for this, and does anyone know of any tools (free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • SharePoint 2010 – SQL Server has an unsupported version 10.0.2531.0

    - by Jeff Widmer
    I am trying to perform a database attach upgrade to SharePoint Foundation 2010. At this point I am trying to attach the content database to a Web application by using Windows Powershell: Mount-SPContentDatabase -Name <DatabaseName> -DatabaseServer <ServerName> -WebApplication <URL> [-Updateuserexperience] I am following the directions from this TechNet article: Attach databases and upgrade to SharePoint Foundation 2010.  When I go to mount the content database I am receiving this error: Mount-SPContentDatabase : Could not connect to [DATABASE_SERVER] using integrated security: SQL server at [DATABASE_SERVER] has an unsupported version 10.0.2531.0. Please refer to “http://go.microsoft.com/fwlink/?LinkId=165761” for information on the minimum required SQL Server versions and how to download them. At first this did not make sense because the default SharePoint Foundation 2010 website was running just fine.  But then I realized that the default SharePoint Foundation site runs off of SQL Server Express and that I had just installed SQL Server Web Edition (since the database is greater than 4GB) and restored the database to this version of SQL Server. Checking the documentation link above I see that SharePoint Server 2010 requires a 64-bit edition of SQL Server with the minimum required SQL Server versions as follows: SQL Server 2008 Express Edition Service Pack 1, version number 10.0.2531 SQL Server 2005 Service Pack 3 cumulative update package 3, version number 9.00.4220.00 SQL Server 2008 Service Pack 1 cumulative update package 2, version number 10.00.2714.00 The version of SQL Server 2008 Web Edition with Service Pack 1 (the version I installed on this machine) is 10.0.2531.0. SELECT @@VERSION: Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64)   Mar 29 2009 10:11:52   Copyright (c) 1988-2008 Microsoft Corporation  Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM) But I had to read the article several times since the minimum version number for SQL Server Express is 10.0.2531.0.  At first I thought I was good with the version of SQL Server 2008 Web that I had installed, also 10.0.2531.0.  But then I read further to see that there is a cumulative update (hotfix) for SQL Server 2008 SP1 (NOT the Express edition) that is required for SharePoint 2010 and will bump the version number to 10.0.2714.00. So the solution was to install the Cumulative update package 2 for SQL Server 2008 Service Pack 1 on my SQL Server 2008 Web Edition to allow SharePoint 2010 to work with SQL Server 2008 (other than the SQL Server 2008 Express version). SELECT @@VERSION (After installing Cumulative update package 2): Microsoft SQL Server 2008 (SP1) - 10.0.2714.0 (X64)   May 14 2009 16:08:52   Copyright (c) 1988-2008 Microsoft Corporation  Web Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM)

    Read the article

  • VB.NET IF() Coalesce and “Expression Expected” Error

    - by Jeff Widmer
    I am trying to use the equivalent of the C# “??” operator in some VB.NET code that I am working in. This StackOverflow article for “Is there a VB.NET equivalent for C#'s ?? operator?” explains the VB.NET IF() statement syntax which is exactly what I am looking for... and I thought I was going to be done pretty quickly and could move on. But after implementing the IF() statement in my code I started to receive this error: Compiler Error Message: BC30201: Expression expected. And no matter how I tried using the “IF()” statement, whenever I tried to visit the aspx page that I was working on I received the same error. This other StackOverflow article Using VB.NET If vs. IIf in binding/rendering expression indicated that the VB.NET IF() operator was not available until VS2008 or .NET Framework 3.5.  So I checked the Web Application project properties but it was targeting the .NET Framework 3.5: So I was still not understanding what was going on, but then I noticed the version information in the detailed compiler output of the error page: This happened to be a C# project, but with an ASPX page with inline VB.NET code (yes, it is strange to have that but that is the project I am working on).  So even though the project file was targeting the .NET Framework 3.5, the ASPX page was being compiled using the .NET Framework 2.0.  But why?  Where does this get set?  How does ASP.NET know which version of the compiler to use for the inline code? For this I turned to the web.config.  Here is the system.codedom/compilers section that was in the web.config for this project: <system.codedom>     <compilers>         <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">             <providerOption name="CompilerVersion" value="v3.5" />             <providerOption name="WarnAsError" value="false" />         </compiler>     </compilers> </system.codedom> Keep in mind that this is a C# web application project file but my aspx file has inline VB.NET code.  The web.config does not have any information for how to compile for VB.NET so it defaults to .NET 2.0 (instead of 3.5 which is what I need). So the web.config needed to include the VB.NET compiler option.  Here it is with both the C# and VB.NET options (I copied the VB.NET config from a new VB.NET Web Application project file).     <system.codedom>         <compilers>             <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">                 <providerOption name="CompilerVersion" value="v3.5" />                 <providerOption name="WarnAsError" value="false" />             </compiler>       <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">         <providerOption name="CompilerVersion" value="v3.5"/>         <providerOption name="OptionInfer" value="true"/>         <providerOption name="WarnAsError" value="false"/>       </compiler>     </compilers>     </system.codedom>   So the inline VB.NET code on my aspx page was being compiled using the .NET Framework 2.0 when it really needed to be compiled with the .NET Framework 3.5 compiler in order to take advantage of the VB.NET IF() coalesce statement.  
Without the VB.NET web.config compiler option, the default is to compile using the .NET Framework 2.0 and the VB.NET IF() coalesce statement does not exist (at least in the form that I want it in).  FYI, there is an older IF statement in VB.NET 2.0 compiler which is why it is giving me the unusual “Expression Expected” error message – see this article for when VB.NET got the new updated version. EDIT (2011-06-20): I had made a wrong assumption in the first version of this blog post.  After a little more research and investigation I was able to figure out that the issue was in the web.config and not with the IIS App Pool.  Thanks to the comment from James which forced me to look into this again.

    Read the article

  • Difference between 12.04 and 12.04.1

    - by Jeff
    I did a fresh install of Ubuntu 12.04 two days ago. Or at least I thought it was 12.04, but it was actually 12.04.1. Now I'm having errors popping up from the GRUB loader: "Error: no video mode activated", which was apparently resolved in bug #699802. However, those workarounds are for 11.xx and are not working for me. I never had these errors before with 12.04 and now I'm getting them. What's the difference between 12.04 and 12.04.1? Off the bat I notice that the kernels are different: 12.04 uses 3.2.0-26-generic-pae, 12.04.1 uses 3.2.0-29-generic, and after an immediate sudo apt-get update and upgrade 12.04.1 uses 3.2.0-30-generic. I have two other computers running 12.04 (not 12.04.1) and they're working fine. The computer that I'm currently using was working fine (with 12.04) previously too. Should I roll back my kernel to 3.2.0-26?

    Read the article

  • How do I prevent a website being misclassified by Websense?

    - by Jeff Atwood
    I received the following email from a user of one of our websites: This morning I tried to log into example.com and I was blocked by Websense at work because it is considered a "social networking" site or something. I assume the websense filter is maintained by a central location, so I'm hoping that by letting you guys know you can get it unblocked. per Wikipedia, Websense is web filtering or Internet content-control software. This means one (or more) of our sites is being miscategorized by Websense as "social networking" and thus disallowed for access at any workplace that uses Websense to control what websites their users can and cannot access during work hours. (I know, they are monsters!) How do we dispute this Websense classification error, as our websites should generally be considered "information technology" and never "social networking"? How do we know what category Websense has put our sites in, so we can pro-actively make sure they're not wrong?

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications. The ones they do are typically not full-featured, and to get the full feature set you need to buy their upgraded version. For example, when it comes to photo editing and cataloging, iPhoto is not a solution for large files or RAW processing; you need Aperture, which is a couple hundred dollars. I am not complaining, because I like it when an application has a product team who generates revenue with it, because the chance of them being around longer seems to be higher. What is my point in all of this? Apple does not produce a product for blogging/journaling like Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, integration with most blog APIs and ability to Save Local or Publish as a Draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don’t believe that will ever happen as it is not a revenue generating product and Microsoft doesn’t often port to the Mac besides Remote Desktop Connection and MSN Messenger. For my configuration I used to use only Boot Camp on the two MacBook Pros I have owned in the past three years because I’m a PC, but after four different rebuilds (not typically due to Windows, but Boot Camp or Parallels) I decided to move off the Boot Camp platform and to VMWare Fusion. This is a completely separate blog post that I should spec out in MacJournal, but I now always boot into the Mac OS and use Fusion for my AJI Software VM or my client’s VMs. It just seems to work better for me, and I have a very nice way to back up my Windows environments with VMWare. Needless to say, there was a need in my new laptop configuration for a blogging tool that worked natively on a Mac. I don’t like to power up my machine for writing a document or working on an image and need to boot up a VM just so I can use Windows. Some would say why not just use a Windows laptop and put the MBP on eBay? It is just a preference, and right now I like the Mac OS for day to day work. So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is missing some features I like in MacJournal. I hope the price point comes down on MacJournal cause I could see paying $19.95 for it, but it is always hard for me to buy a piece of software for $39.95 when I can use something else. But I am a cheapskate when it comes to software packages. I suggest if you are using a Mac to drop what you are doing and pick up the MacHeist bundle today before it is over, but if you are reading this later, then download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment. Product Links: MacJournal by Mariners Software, $39.95 (part of MacHeist bundle for $19.95 with only one day left); Windows Live Writer by Microsoft. This post was created using MacJournal. [Update: The joys of formatting. Make sure if you are a Geekswithblogs.net member that you use this configuration to set up the Metablog formatting of paragraphs correctly]

    Read the article

  • How to configure XRDP to start cinnamon as default desktop session

    - by Jeff
    I was wondering if there is a way to make Cinnamon 1.4 the default environment upon logging in to Ubuntu 12.04. I can install Cinnamon 1.4 without any problems, but I am trying to run XRDP to log in from a Windows machine and would like it to start a "Cinnamon session" instead of a Unity session by default. The question is: how can I tell XRDP to use Cinnamon instead of Unity upon logging in? XRDP seems to work much better than any VNC-based server.

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction - removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked out pages, there are actually many uses for redaction today beyond military and government. Many companies have a need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or less privileged users - insurance companies, banks and legal firms are a few examples. The process of digital redaction actually isn't that far from the old paper method: Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET" Step 2. Make a copy of that document, since some folks still need to access the original contents Step 3. Black out the text or pages you want to hide Step 4. Release or distribute this new 'redacted' copy So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years! 1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK! 2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions using our APIs. Black out the text you want to hide: CHECK! 3. With AutoVue's conversion capabilities, you can 'burn-in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning-in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK! 4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that - no redactions. An 'unauthorized' user, when requesting to view that same document, can get redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK! See this movie (WMV format, 2mins, 20secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember - this is all in our flagship AutoVue product - no additional software required!

    Read the article

  • Resolving TFS_SCHEMA_VERSION Errors In Team Foundation Server 2010 Collection Databases

    - by Jeff Ferguson
    I recently backed up a Team Foundation Server 2010 project collection database and restored it onto another server. All of that went well, until I tried to use the restored database on the new server. As it turns out, the old server was running the Release Candidate of TFS 2010 and the new server is running the RTM version of TFS 2010. I ended up with an error message shown on the new server's Team Web Access site about the project collection's TFS_SCHEMA_VERSION property not containing the appropriate value. As it turns out, TFS_SCHEMA_VERSION is an extended property on the project collection database. I ran the following SQL script against the project collection database restored onto the new server: EXEC [Tfs_DefaultCollection].sys.sp_dropextendedproperty @name=N'TFS_PRODUCT_VERSION' GO EXEC [Tfs_DefaultCollection].sys.sp_addextendedproperty @name=N'TFS_PRODUCT_VERSION', @value=N'10.0.30319.1' GO EXEC [Tfs_DefaultCollection].sys.sp_dropextendedproperty @name=N'TFS_SCHEMA_VERSION' GO EXEC [Tfs_DefaultCollection].sys.sp_addextendedproperty @name=N'TFS_SCHEMA_VERSION', @value=N'Microsoft Team Foundation Server 2010 (RTM)' GO Now, all is well. I can now navigate to http://newserver:8080/tfs/ and see the restored project collection and its contents.

    Read the article
