Search Results

Search found 35094 results on 1404 pages for 'post build'.

Page 236/1404 | < Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >

  • FreeBSD Ports: How can I see all dependencies for a port, and all subdependencies for those dependencies?

    - by Stefan Lasiewski
    I'm trying to build a port which depends on apache-ant. I thought I could run make build-depends-list to see all dependencies required by this port:

        # make build-depends-list
        /usr/ports/devel/apache-ant
        /usr/ports/java/jdk16
        /usr/ports/math/gmp

    But after installing everything, the port had a dependency list a mile long:

        apache-ant-1.8.1 desktop-file-utils-0.15_2 gamin-0.1.10_4 gettext-0.18.1.1 gio-fam-backend-2.26.1 glib-2.26.1_1 gmp-5.0.1 inputproto-2.0 javavmwrapper-2.3.5 kbproto-1.0.4 libX11-1.3.3_1,1 libXau-1.0.5 libXdmcp-1.0.3 libXext-1.1.1,1 libXi-1.3,1 libXtst-1.1.0 libiconv-1.13.1_1 libpthread-stubs-0.3_3 libxcb-1.7 pcre-8.12 perl-5.10.1_3 pkg-config-0.25_1 python26-2.6.6 recordproto-1.14 unzip-6.0 xextproto-7.1.1 xproto

    How can I see all dependencies, and all subdependencies, for a port?
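    One thing I've since found (worth checking against your ports tree version): the ports framework also defines an all-depends-list target alongside build-depends-list and run-depends-list, and it recurses through the entire dependency tree rather than listing only the direct build dependencies:

        # direct build dependencies only
        cd /usr/ports/devel/apache-ant && make build-depends-list

        # full recursive list of build and run dependencies
        make all-depends-list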

    Read the article

  • How do I cut and paste commands from your blog?

    - by Maria Colgan
    At the recent ODTUG Kscope 12 conference, several people told me that they really enjoyed our blog on the Optimizer but were frustrated because they couldn't cut and paste the commands used in the blog posts straight into their environment. Typically I use screen shots in the blog posts to make the commands clear, but that does mean it is impossible to cut and paste the commands into your environment. To get around this, I have created a downloadable .sql script for each of our blog posts. You should now see the sentence "You can get a copy of the script I used to generate this post here" appearing at the bottom of each blog post. Clicking on the link will open the .sql script that contains all of the commands used in the post. You can either save the entire script or just cut and paste the particular command you are interested in! I have added scripts for all of this year's blog posts and am slowly making my way through our old posts until we have a script for everything we have posted to date. Hopefully this will help! +Maria Colgan

    Read the article

  • Why the recent shift to removing/omitting semicolons from Javascript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript, semicolons are optional and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were: it's a matter of preference for his team; less typing Are there other good reasons to leave them out? Frankly I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad?
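    For reference, the usual argument for keeping them is that automatic semicolon insertion has real traps; two standard examples (plain JavaScript semantics, nothing framework-specific):

        // No semicolon is inserted after the first line, so this
        // parses as: a = b + c(d + e).print()
        a = b + c
        (d + e).print()

        // A newline after return ends the statement: this function
        // returns undefined, and the braces below are parsed as a
        // block, never as the intended object literal.
        function f() {
          return
          { value: 42 }
        }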

    Read the article

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It’s a cost/benefit decision, and although the author gives it a quick treatment and doesn’t take all points in the decision into account, you should focus on the process he follows. It’s a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, or application and even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the “other” costs in the decision, and then make the right one. Sometimes the cheapest option is the cheapest, and other times, well, it isn’t. I’ve seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points in the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here – it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx

    Read the article

  • [Kubuntu 14.04][Eclipse] (ADT) crashes at button OK from Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can reproduce it by going to Project properties and pressing OK - then it always crashes.

    My system:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

    My Java:

        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    My ADT version: Android Development Toolkit Version: 23.0.0.1245622

    I already tried to add this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:

        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla

    Error:

        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  [libgobject-2.0.so.0+0x19718]  g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
         total in heap  [0x00007fe051da6790,0x00007fe051da6af0] = 864
         relocation     [0x00007fe051da68b0,0x00007fe051da68f8] = 72
         main code      [0x00007fe051da6900,0x00007fe051da6ae8] = 488
         oops           [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.

    Now, as soon as I change System Settings - Application Appearance - GTK - GTK theme to anything other than "oxygen-gtk", this crash doesn't happen anymore. But then the application appearance is ugly. Besides that, I get a lot of errors/warnings like:

        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed

    or other GTK warnings from the particular theme about a missing theme engine, none of which actually seems to cause a crash so far. So I have 3 options:

    1. accept crashes
    2. accept warnings (maybe the best choice)
    3. accept ugly design

    What can I do to solve this issue without changing the design settings?

    Read the article

  • Using wget to download pdf files from a site that requires cookies to be set

    - by matt74tm
    I want to access a newspaper site and then download their epaper copies (in PDF). The site requires me to log in using my email address and password, and then it permits me to access those PDF URLs. I'm having trouble 'setting my session' in wget. When I log into the site from my browser, it sets two cookie values:

        [email protected]
        Password=12345

    I tried:

        wget --post-data "[email protected]&Password=12345" http://epaper.abc.com/login.aspx

    However, that just downloaded the login page and saved it locally. The FORM on the login page has two fields, txtUserID and txtPassword, and radio buttons like this:

        <input id="rbtnManchester" type="radio" checked="checked" name="txtpub" value="44">

    Another button:

        <input id="rbtnLondon" type="radio" name="txtpub" value="64">

    If I post this to the login.aspx page, I get the same output:

        wget --post-data "[email protected]&txtPassword=12345&txtpub=44" http://epaper.abc.com/login.aspx

    If I add --save-cookies abc_cookies.txt, the cookie file doesn't seem to have anything other than the default content. For the last command, if I add --debug as well, it says:

        ...
        Set-Cookie: ASP.NET_SessionId=05kphcn4hjmblq45qgnjoe41; path=/; HttpOnly
        ...
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId 05kphcn4hjmblq45qgnjoe41
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx'
        ...
        Saving cookies to abc_cookies.txt.

    However, abc_cookies.txt shows ONLY the following:

        # HTTP cookie file.
        # Generated by Wget on 2011-08-16 08:03:05.
        # Edit at your own risk.

    (Not sure why I'm not getting any responses on SO - perhaps SU is a better forum - http://stackoverflow.com/questions/7064171/using-wget-to-download-pdf-files-from-a-site-that-requires-cookies-to-be-set)

    EDIT 1

        C:\Temp>wget --cookies=on --keep-session-cookies --save-cookies abc_cookies.txt --post-data "txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7" http://epaper.abc.com/login.aspx --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        --2011-08-18 08:15:59--  http://epaper.abc.com/login.aspx
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00a2ae80 (new refcount 1).
        ---request begin---
        POST /login.aspx HTTP/1.0
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 100
        ---request end---
        [POST data: txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:17 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Set-Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145; path=/; HttpOnly
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 107253
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx.1'
        2011-08-18 08:16:05 (24.9 KB/s) - `login.aspx.1' saved [107253/107253]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

        C:\Temp>wget --referer=http://epaper.abc.com/login.aspx --cookies=on --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        --2011-08-18 08:16:12--  http://epaper.abc.com/PagePrint/16_08_2011_001.pdf
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00598290 (new refcount 1).
        ---request begin---
        GET /PagePrint/16_08_2011_001.pdf HTTP/1.0
        Referer: http://epaper.abc.com/login.aspx
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:30 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        content-disposition: attachement; filename=Default_logo.gif
        Cache-Control: private
        Content-Type: image/GIF
        Content-Length: 4568
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Length: 4568 (4.5K) [image/GIF]
        Saving to: `16_08_2011_001.pdf'
        2011-08-18 08:16:14 (7.74 KB/s) - `16_08_2011_001.pdf' saved [4568/4568]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

    Note that the "PDF" came back as a 4,568-byte GIF (Default_logo.gif), so the site clearly isn't treating my session as logged in. Contents of abc_cookies.txt:

        epaper.abc.com  FALSE  /  FALSE  0  ASP.NET_SessionId  owcrje55yl45kgmhn43gq145
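    EDIT 2 - a theory I haven't confirmed: login.aspx is an ASP.NET WebForms page, and WebForms typically rejects a post unless the hidden __VIEWSTATE / __EVENTVALIDATION fields rendered into the form are echoed back, which would explain why I keep getting the login page again. A rough sketch of the dance (the sed extractions are illustrative only, and the captured values still need URL-encoding before being posted):

        # 1. fetch the login page, capturing the session cookie
        wget --keep-session-cookies --save-cookies abc_cookies.txt \
             -O login_page.html http://epaper.abc.com/login.aspx

        # 2. scrape the hidden WebForms fields out of the page
        VIEWSTATE=$(sed -n 's/.*id="__VIEWSTATE" value="\([^"]*\)".*/\1/p' login_page.html)
        EVENTVALIDATION=$(sed -n 's/.*id="__EVENTVALIDATION" value="\([^"]*\)".*/\1/p' login_page.html)

        # 3. post the complete form with the same cookie jar
        wget --load-cookies abc_cookies.txt --keep-session-cookies \
             --save-cookies abc_cookies.txt -O post_result.html \
             --post-data "__VIEWSTATE=${VIEWSTATE}&__EVENTVALIDATION=${EVENTVALIDATION}&txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44" \
             http://epaper.abc.com/login.aspx

        # 4. only then request the PDF with the authenticated cookies
        wget --load-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf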

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem

    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information, and decided I'd post it here for others to enjoy.

    Background

    The Get-SPShellAdmin cmdlet returns a listing of SPShellAdmins for the database Id you pass in, or for the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution

    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and record db name / username pairs
        # (pairs rather than a hashtable keyed by db name, so a database with
        #  more than one shell admin doesn't hit a duplicate-key error)
        $results = $databasesToQuery | ForEach-Object {
            $dbName = $_.Name
            Get-SPShellAdmin -database $_.Id | ForEach-Object {
                New-Object PSObject -Property @{Database = $dbName; User = $_.UserName}
            }
        }

        # sort results by db name and pipe to table with auto sizing of col width
        $results | Sort-Object -Property Database | ft -AutoSize

    Conclusion

    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed, just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links

    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Should a model binder populate all of the model?

    - by Richard
    Should a model binder populate all of the model, or only the bits that are being posted? For example, I am adding a product in my system, and on the form I want the user to select which sites the new product will appear on. Therefore, in my model I want to populate a collection called "AllAvailableSites" to render the checkboxes for the user to choose from. I also need to populate the model with any chosen sites on a post, in case the form does not validate and I need to re-render the form showing the initial selections. It would seem that I should let the model binder set the chosen sites on the model, and (once in the controller method) set the "AllAvailableSites" on the model myself; a rough sketch follows below. Does that sound right? It seems more efficient to set everything in the model binder, but someone is suggesting it is not quite right. I am grateful for any advice; I have to say that all the MVC model-binding help online seems to cite really simple examples, nothing complicated. Do I really need a GET and a POST version of a method? Can't they just take the same view model? Then I check in my model binder if it is a GET/POST, and populate all the model accordingly.
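    The sketch - ISiteRepository and the other names are just placeholders for whatever data access is actually in play:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public class Site { public int Id { get; set; } public string Name { get; set; } }

        public interface ISiteRepository { IList<Site> GetAll(); }

        public class ProductFormViewModel
        {
            public int[] ChosenSiteIds { get; set; }           // bound from the posted checkboxes
            public IList<Site> AllAvailableSites { get; set; } // display-only; never posted
        }

        public class ProductController : Controller
        {
            private readonly ISiteRepository _siteRepository;

            public ProductController(ISiteRepository siteRepository)
            {
                _siteRepository = siteRepository;
            }

            [HttpGet]
            public ActionResult Create()
            {
                var model = new ProductFormViewModel
                {
                    AllAvailableSites = _siteRepository.GetAll() // set here, not in the binder
                };
                return View(model);
            }

            [HttpPost]
            public ActionResult Create(ProductFormViewModel model)
            {
                if (!ModelState.IsValid)
                {
                    // the binder filled ChosenSiteIds from the post; AllAvailableSites
                    // was never posted, so it must be repopulated before re-rendering
                    model.AllAvailableSites = _siteRepository.GetAll();
                    return View(model);
                }
                // save the product against model.ChosenSiteIds, then redirect
                return RedirectToAction("Index");
            }
        }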

    Read the article

  • Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?

    - by Doug T.
    I'm new to Java and Eclipse. One of my most recent discoveries was how Eclipse ships with its own Java compiler (ecj) for doing incremental builds. Eclipse seems by default to output incrementally built class files to the projRoot/bin folder. I've noticed too that many projects come with ant files to build the project, using the Java compiler installed on the system for doing the production builds. Coming from a Windows/Visual Studio world, where Visual Studio invokes the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the makefile. So my mental model is a little off. Is what's produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (i.e. its intellisense/incremental building/etc.)? Is it typical that for the final "release" build of a project, ant, maven, or another tool is used to do the full build from the command line, as in the sketch below? Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is it normal/accepted practice?
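    (For concreteness, this is the sort of minimal ant build I mean - the paths and names are hypothetical, and it deliberately compiles with the JDK's javac, independent of whatever Eclipse's ecj put in bin/:)

        <project name="myapp" default="dist" basedir=".">
          <target name="compile">
            <mkdir dir="build/classes"/>
            <!-- uses the JDK's javac, not Eclipse's incremental compiler -->
            <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
          </target>
          <target name="dist" depends="compile">
            <mkdir dir="dist"/>
            <jar destfile="dist/myapp.jar" basedir="build/classes"/>
          </target>
        </project>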

    Read the article

  • Have you used NDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as it achieves. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does it in some pretty amazing ways. One of the most impressive features is something they call Code Query Language (CQL). It allows you to write queries, very similar to SQL, to track various software metrics and use them to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as:

        SELECT METHODS WHERE IsPublic AND CouldBePrivate

    You can also set up such queries to produce warnings if there are records returned, and you can incorporate this into your daily build and compare build against build. There are over 82 metrics included, allowing you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!
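    For a sense of the kind of rule you can write (quoting from memory, so treat the exact syntax as approximate), a CQL query can also be turned into a build warning, for example flagging long, complex methods:

        WARN IF Count > 0 IN SELECT METHODS WHERE
          NbLinesOfCode > 30 AND
          CyclomaticComplexity > 20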

    Read the article

  • can't run sqldeveloper on Ubuntu

    - by nazar_art
    I tried to install SQL Developer in the following way:

    1. Download SQL Developer from the Oracle website (I chose the Other Platforms download).
    2. Extract the file to /opt:

        sudo unzip sqldeveloper-*-no-jre.zip -d /opt/
        sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh

    3. Link an in-path launcher for Oracle SQL Developer:

        sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper

    4. Edit /usr/local/bin/sqldeveloper, replacing its content with:

        #!/bin/bash
        cd /opt/sqldeveloper/sqldeveloper/bin
        ./sqldeveloper "$@"

    5. Run SQL Developer:

        sqldeveloper

    But it shows the following output:

        nazar@lelyak-desktop:/opt/sqldeveloper$ ./sqldeveloper.sh

        Oracle SQL Developer
        Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.

        LOAD TIME : 401
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007f3b2dcacbe0, pid=20351, tid=139892273444608
        #
        # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  0x00007f3b2dcacbe0
        #
        # Core dump written. Default location: /opt/sqldeveloper/sqldeveloper/bin/core or core.20351
        #
        # An error report file with more information is saved as:
        # /tmp/hs_err_pid20351.log
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        #
        /opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 20351 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"
        134

        nazar@lelyak-desktop:/opt/sqldeveloper$ java -version
        java version "1.7.0_65"
        Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
        Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

    Here is the content of /tmp/hs_err_pid20351.log. How do I solve this trouble?

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen
    "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?"

    Process Oracle OER Events using a simple Web Service | Bob Webster
    Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types."

    Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa
    "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen.

    WebCenter Content (WCC) Trace Sections | ECM Architect
    ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections.

    Thought for the Day
    "A complex system that works is invariably found to have evolved from a simple system that worked." - John Gall
    Source: SoftwareQuotes.com

    Read the article

  • Azure - Part 5 - Repository Pattern for Table Service

    - by Shaun
    In my last post I created a very simple WCF service with the user registration functionality. I created an entity for the user data and a DataContext class which provides some methods for operating on the entities, such as add, delete, etc. And in the service method I utilized it to add a new entity into the table service. But I didn't have any validation before registering, which is not acceptable in a real project. So in this post I will first add some validation before performing the data creation code, and show how to use LINQ with the table service.

    LINQ to Table Service

    Since the table service utilizes ADO.NET Data Services to expose the data, and the managed library of ADO.NET Data Services supports LINQ, we can use it to deal with the data of the table service. Let me explain with my current example: I would like to ensure that when registering a new user the email address is unique, so I need to check the account entities in the table service before the add. If you remember, in my last post I mentioned that there's a method in the TableServiceContext class - CreateQuery - which will create an IQueryable instance for a given type of entity. So here I create a method under my AccountDataContext class, named Load, to return the IQueryable<Account>.

        public class AccountDataContext : TableServiceContext
        {
            private CloudStorageAccount _storageAccount;

            public AccountDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                    _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist("Account");
            }

            public void Add(Account accountToAdd)
            {
                AddObject("Account", accountToAdd);
                SaveChanges();
            }

            public IQueryable<Account> Load()
            {
                return CreateQuery<Account>("Account");
            }
        }

    The method returns the IQueryable<Account> so that I can perform LINQ operations on it. And back in my service class, I use it to implement my validation.

        public bool Register(string email, string password)
        {
            var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
            var accountToAdd = new Account(email, password) { DateCreated = DateTime.Now };
            var accountContext = new AccountDataContext(storageAccount);

            // validation
            var accountNumber = accountContext.Load()
                .Where(a => a.Email == accountToAdd.Email)
                .Count();
            if (accountNumber > 0)
            {
                throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
            }

            // create entity
            try
            {
                accountContext.Add(accountToAdd);
                return true;
            }
            catch (Exception ex)
            {
                Trace.TraceInformation(ex.ToString());
            }
            return false;
        }

    I used the Load method to retrieve the IQueryable<Account>, and the Where method to find the accounts whose email address is the same as the one being registered. If one exists, I throw an exception back to the client side. Let's run it and test from my simple client application.

    Oops! Looks like we encountered an unexpected exception. It said "Count" is not supported by the ADO.NET Data Services LINQ managed library. That is because the table storage managed library (aka. TableServiceContext) is based on ADO.NET Data Services, and it supports only a very limited set of LINQ operations.

    Although I didn't find a full list or documentation of which LINQ methods it supports, I can refer you to a page on MSDN here. It gives a rough summary of which query operations the ADO.NET Data Services managed library supports and which it doesn't. As you can see, the Count method is not in the supported list. And it's not only the query operators: the inner lambda expressions in the Where method are limited when using the ADO.NET Data Services managed library as well. For example, if you added (a => !a.DateDeleted.HasValue) in the Where method to exclude deleted accounts, it would raise an exception saying "Invalid Input". Based on my experience, you should always use simple comparisons (such as ==, >, <=, etc.) on simple members (such as string, integer, etc.), and not use any shortcut methods (such as string.Compare, string.IsNullOrEmpty, etc.).

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == accountToAdd.Email)
            .ToList()
            .Count;
        if (accountNumber > 0)
        {
            throw new ApplicationException(string.Format("Your account {0} had been used.", accountToAdd.Email));
        }

    We changed the code a bit and tried again. Since I had already created an account with my mail address, this time it gave me an exception saying that the email had been used, which is correct.

    Repository Pattern for Table Service

    The AccountDataContext takes the responsibility of saving and loading the account entity, but only for that specific entity. Is it possible to have a dynamic or generic DataContext class which can operate on any kind of entity in my system? Of course, yes. Although there's no typical database in the table service, we can treat the entities as records, similar to the data entities if we used O/R mapping. Just as we can use patterns from ORM architecture here, we should be able to adopt one of them - the Repository Pattern - in this example. We know that the base class TableServiceContext provides four methods for operating on table entities: CreateQuery, AddObject, UpdateObject and DeleteObject. And we can create a relationship between the entity class, the table container name and the entity set name. So it's really simple to have a generic base class for any kind of entity. Let's rename AccountDataContext to DynamicDataContext and make the Account type a type parameter of it.

        public class DynamicDataContext<T> : TableServiceContext where T : TableServiceEntity
        {
            private CloudStorageAccount _storageAccount;
            private string _entitySetName;

            public DynamicDataContext(CloudStorageAccount storageAccount)
                : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
            {
                _storageAccount = storageAccount;
                _entitySetName = typeof(T).Name;

                var tableStorage = new CloudTableClient(_storageAccount.TableEndpoint.AbsoluteUri,
                    _storageAccount.Credentials);
                tableStorage.CreateTableIfNotExist(_entitySetName);
            }

            public void Add(T entityToAdd)
            {
                AddObject(_entitySetName, entityToAdd);
                SaveChanges();
            }

            public void Update(T entityToUpdate)
            {
                UpdateObject(entityToUpdate);
                SaveChanges();
            }

            public void Delete(T entityToDelete)
            {
                DeleteObject(entityToDelete);
                SaveChanges();
            }

            public IQueryable<T> Load()
            {
                return CreateQuery<T>(_entitySetName);
            }
        }

    I saved the name of the entity type at construction time, for performance. The table name and entity set name will be the same as the name of the entity class.

    The Load method returns a generic IQueryable instance which supports the lazy load feature. Then in my service class I changed the AccountDataContext to DynamicDataContext, and that's all.

        var accountContext = new DynamicDataContext<Account>(storageAccount);

    Run it again and register another account. The DynamicDataContext can now be used for any entity. For example, I would like the account to have a list of notes, each of which contains three custom properties: Account Email, Title and Content. We create the note entity class.

        public class Note : TableServiceEntity
        {
            public string AccountEmail { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public DateTime DateCreated { get; set; }
            public DateTime? DateDeleted { get; set; }

            public Note()
                : base()
            {
            }

            public Note(string email)
                : base(email, string.Format("{0}_{1}", email, Guid.NewGuid().ToString()))
            {
                AccountEmail = email;
            }
        }

    And with no need to tweak the DynamicDataContext, we can go directly to the service class to implement the logic. Notice here I utilized two DynamicDataContext instances with different type parameters: Note and Account.

        public class NoteService : INoteService
        {
            public void Create(string email, string title, string content)
            {
                var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
                var accountContext = new DynamicDataContext<Account>(storageAccount);
                var noteContext = new DynamicDataContext<Note>(storageAccount);

                // validate - email must exist
                var accounts = accountContext.Load()
                    .Where(a => a.Email == email)
                    .ToList()
                    .Count;
                if (accounts <= 0)
                    throw new ApplicationException(string.Format("The account {0} does not exist in the system, please register and try again.", email));

                // save the note
                var noteToAdd = new Note(email) { Title = title, Content = content, DateCreated = DateTime.Now };
                noteContext.Add(noteToAdd);
            }
        }

    And I updated our client application to test the service. I didn't implement any list service to show all notes, but we can have a look at the local SQL database if we run it in the local development fabric.

    Summary

    In this post I explained a bit about the limited LINQ support in the table service, and then demonstrated how to use the repository pattern in the table service data access layer to make the DataContext dynamic. The DynamicDataContext I created in this post is just a prototype. In fact we should create the relevant interfaces to make it testable, and for better structure we'd better separate the DataContext classes for each individual kind of entity. So it should have IDataContextBase<T> and DataContextBase<T>, and for each entity we would have:

        class AccountDataContext : DataContextBase<Account>, IDataContextBase<Account> { … }
        class NoteDataContext : DataContextBase<Note>, IDataContextBase<Note> { … }

    Besides structured data saving and loading, another common scenario is saving and loading binary data such as images and files. In my next post I will show how to use the Blob Service to store binary data - making the account able to upload their logo in my example.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Weblogic 10.3 weblogic.jar classpath issues

    - by user63063
    I am upgrading an old WLS 8.1 app to 10.3 (11g). My ant build includes only the new weblogic.jar in the compile classpath, and the build runs with no issues. But when I include weblogic.jar as a library in the IDE (IntelliJ) I see many unresolved imports (for example: weblogic.xml.xpath.DOMXPath), and when I check the weblogic.jar I see that the classes are indeed missing from it. Compiling with verbose revealed that by including weblogic.jar in the ant classpath, many other jars in BEA_HOME/modules are loaded onto the classpath as well (for example: com.bea.core.xml.weblogic.xpath_1.4.0.0.jar). Can anyone explain what is going on? How can I fix my IDE classpath - do I need to import all the module jars? Many of the module jars seem to be there to support old deprecated WebLogic 8 APIs (like weblogic.xml.xpath.DOMXPath); how can I exclude these modules from my ant build? (I want to expose the APIs I need to upgrade.) Thanks, NY

    Read the article

  • How to properly learn ASP.NET MVC

    - by Qmal
    Hello everyone, I have a question to ask, and maybe some of you will think it's lame, but I hope someone will get me on the right track. I've been programming for quite some time now. I started when I was about 13 or so in Delphi, but when I was about 17 I switched to C#, and now I really like programming with it, mostly because its syntax is very appealing to me, plus managed code is very good. So it was all good and fun, but then I had some job openings that I of course took, and the problem with them is that they are all about web programming. So I had to learn PHP and MVC fundamentals, and I somewhat did while building applications using the CI and Kohana frameworks. But I want to build websites using ASP.NET, because I like C# much, much more than PHP. TL;DR: I want to learn ASP.NET MVC but I don't know where to start. What I want to start with is building something simple like a CMS, but I don't know where to begin. Do I use the same logic as in PHP? What do I use for DB connections? And also, if I plan to host something built with ASP.NET MVC 3 on a hosting provider, do I need to buy some kind of license?

    Read the article

  • What is your strategy for converting RC builds into retail?

    - by Matthew PK
    We're trying to implement a strategy for how we transition our builds from RC to released retail code. When we label a build as a release candidate, we send it to QA for regression. If they approve it, that RC then becomes our released retail code. I liked the idea of "obvious" labeling of versions, so that a user knows whether they have a beta, an RC, or retail code - where you would have some obvious watermark in non-retail code (think Windows 7, where the RC or non-genuine builds show a watermark in the bottom right). But it seemed strange to us to manipulate the project (to remove the watermark) once it passed regression. If QA certified version a.b.c.d, then our retail code should be that same version, not a.b.c.d+1. What strategies have you employed to clearly label non-release software versions without incrementing your build to disable the watermarks in your retail code? One idea I've considered is having the build look for a signed file in the installer archive; non-release code wouldn't include this file, so the app would know to display a watermark. But even this seems like QA is then working with non-release code. Ideas?
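    A sketch of that last idea, purely illustrative - the marker file name is invented, and in practice you would verify the file cryptographically rather than just checking for its presence:

        using System;
        using System.Drawing;
        using System.IO;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            // The binary is identical for RC and retail; only the installer
            // differs, by packaging (or omitting) a hypothetical release.marker.
            private static bool IsRetailInstall()
            {
                var markerPath = Path.Combine(
                    AppDomain.CurrentDomain.BaseDirectory, "release.marker");
                // real code should validate a digital signature here,
                // not merely the file's existence
                return File.Exists(markerPath);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                if (!IsRetailInstall())
                {
                    // watermark only appears when the marker is absent
                    e.Graphics.DrawString("RELEASE CANDIDATE - NOT FOR RESALE",
                        Font, Brushes.Gray,
                        ClientSize.Width - 320, ClientSize.Height - 30);
                }
            }
        }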

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn't work well, for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines. This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on, because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, "Don't return tweets earlier than this". What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it's a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here's some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

        // last tweet processed on previous query set
        ulong sinceID = 210024053698867204;
        ulong maxID;
        const int Count = 10;
        var statusList = new List<Status>();

    Here, I've hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won't have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you're querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat is that Twitter won't return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So initializing SinceID to too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you're trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declarations for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you'll use the maxID variable to set the MaxID property in queries, which I'll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don't know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be. Here's the code for the initial query:

        var userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.SinceID == sinceID &&
                   tweet.Count == Count
             select tweet)
            .ToList();

        statusList.AddRange(userStatusResponse);

        // first tweet processed on current query
        maxID = userStatusResponse.Min(
            status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both the SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet, or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be that there aren't Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned - if the query above doesn't return any tweets, you'll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what's important about this code is the final line that sets maxID. It retrieves the lowest-numbered status ID in the results. Since the lowest-numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you'll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

        do
        {
            // now add sinceID and maxID
            userStatusResponse =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.User &&
                       tweet.ScreenName == "JoeMayo" &&
                       tweet.Count == Count &&
                       tweet.SinceID == sinceID &&
                       tweet.MaxID == maxID
                 select tweet)
                .ToList();

            if (userStatusResponse.Count > 0)
            {
                // first tweet processed on current query
                maxID = userStatusResponse.Min(
                    status => ulong.Parse(status.StatusID)) - 1;

                statusList.AddRange(userStatusResponse);
            }
        }
        while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we've already read, and Count specifies the largest number of tweets to return. Earlier, I mentioned how important it is to check how many tweets were returned, because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. Reasons why there wouldn't be results include: the first query got all the new tweets, so there are no more to get; or there were no new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to the oldest, so setting MaxID lets us move from the most recent tweets down to the oldest, as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are still being returned and the maximum number of tweets has not yet been reached. Logically, you want to stop reading when you've read all the tweets, and that's indicated by the fact that the most recent query did not return results. I put in the check to stop after 30 tweets to keep the demo from running too long - in the console the response scrolls past the available buffer, and I wanted you to be able to see the complete output. Yet there's another point to be made about constraining the number of items you return at one time: the Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you'll have to ensure you don't exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

    Summary

    Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error-handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

    @JoeMayo

    Read the article

  • Modded Portal Gun Levitates a Companion Cube [Video]

    - by Jason Fitzpatrick
    This cleverly designed Portal gun prop levitates a model Companion Cube; the whole setup just begs to be paired with a Halloween costume. Courtesy of Caleb over at Hack A Day: I was out to lunch with a couple friends, brainstorming ideas for fun projects when one of them says “Wouldn’t it be cool if we could build a working gravity gun?”. We all immediately concurred that while it would in fact be cool, it is also a silly proposition. However, only a few seconds later, I realized we could do a display piece that emulated this concept very easily. Floating magnetic globes have been around for quite some time. I determined I would tear the guts out of a stock floating globe and mount it on a portal gun, since they’re easier to find than a gravity gun. I would also build a custom companion cube to be the correct size and weight necessary. Watch the video above and then check out the link below for more information on the build.

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object. (this object could be another player) The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet, and the last packet received. The linear interpolation algorithm is the same as this post: Interpolating positions in a multiplayer game I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario: Normal packet timing, everything is okay The next expected packet is late. That's okay, we'll just extrapolate based on previous positions The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information? The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solutiuon could be applied to those as well.
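    One direction I'm considering (a sketch only - Vector3 stands in for whatever vector type the engine provides, e.g. XNA's or Unity's, and the 100 ms delay is an arbitrary choice): render the remote object a fixed interval in the past and keep a small buffer of timestamped snapshots. A late or out-of-order packet is then simply inserted into the buffer in timestamp order, and as long as it arrives before the render clock reaches its timestamp, it never causes a visible correction:

        using System;
        using System.Collections.Generic;

        struct Snapshot { public double Time; public Vector3 Position; }

        class RemoteEntity
        {
            readonly List<Snapshot> buffer = new List<Snapshot>();
            const double InterpolationDelay = 0.1; // render 100 ms behind real time

            public void OnPacket(double timestamp, Vector3 position)
            {
                // insert in timestamp order so late/out-of-order packets still slot in
                int i = buffer.FindLastIndex(s => s.Time <= timestamp);
                buffer.Insert(i + 1, new Snapshot { Time = timestamp, Position = position });
            }

            // assumes at least one snapshot has already arrived
            public Vector3 Sample(double now)
            {
                double renderTime = now - InterpolationDelay;

                // find the pair of snapshots straddling renderTime and lerp between them
                for (int i = buffer.Count - 1; i >= 1; i--)
                {
                    if (buffer[i - 1].Time <= renderTime && renderTime <= buffer[i].Time)
                    {
                        float t = (float)((renderTime - buffer[i - 1].Time) /
                                          (buffer[i].Time - buffer[i - 1].Time));
                        return Vector3.Lerp(buffer[i - 1].Position, buffer[i].Position, t);
                    }
                }

                // nothing new enough yet: fall back to the latest known position
                // (or extrapolate here, which is where the hard cases live)
                return buffer[buffer.Count - 1].Position;
            }
        }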

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-10

    - by Bob Rhubart
    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman
    Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

    Series: How to Kill the Architecture Department? Part 1 | Xebia Blog
    Don't let the title fool you. This is not an anti-architecture post. Rather, this post, part 1 of a now four-part series, offers suggestions for preserving architecture in a form that better supports agile organizations.

    BPM Suite configure BAM Adapter | Peter Paul van der Beek
    "To have the BPM server push events to BAM - Business Activity Monitoring - we have to configure the BPM suite to use the BAM Adapter," says Peter Paul van der Beek. "The BAM Adapter is configured (like other SOA Suite and BPM Adapters) in the WebLogic Server Console." Peter Paul shows you how in this brief post.

    A case for not installing your own software | James Gentsch
    "I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for," says James Gentsch. "I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make 'well, it works here' a well forgotten phrase for future software developers."

    Thought for the Day
    "I'm a strong believer in being minimalistic. Unless you actually are going to solve the general problem, don't try and put in place a framework for solving a specific one, because you don't know what that framework should look like." - Anders Hejlsberg
    Source: SoftwareQuotes.com

    Read the article

  • Help me select a "Simpler" target to create a new language: .NET, LLVM, Go, Own VM

    - by mamcx
    Let's define "Simple":

    1. This is my first language. I have no previous experience, and I will not dedicate 4+ years to learning it properly. I'm a professional software developer, but as an amateur in this area I want instant gratification. If the idea shows a future, I could rewrite it.
    2. I don't want to do everything from scratch. In fact, if there were a way to take Go (for example), change its syntax, add some sugar, add some extra functions, and leave everything else intact, that would be perfect! From the example of CoffeeScript/Scala, I think it's better to build on top of a rich runtime like .NET/Go so I don't need to rewrite everything. HOWEVER, if another way is better, no problem for the first try!
    3. I want it in a week. I need it in a week, so it will really take a month, and then it will truly take 3 months. But I don't want to put more than 3 months into this. I could reduce the scope of my language, but I hope the tools will help me a lot...

    I want to build a new language: similar to Python, but typed. I wonder what to build it on top of. I like the idea of building on top of Go, to get its sane (IMHO) OO paradigm (I plan to do the same, using interfaces, not inheritance), goroutines and some other stuff. In my naive thinking, I imagine that emitting another language could help me debug mine more easily. However, it looks like everyone is building on top of something like .NET (I don't like Java), LLVM, or making their own VM. I read http://createyourproglang.com/ (great!) and the VM part looks "easy" to me. So, what I need are the proper criteria and the questions I need to answer in advance to have a fair shot at making this.

    Read the article

  • Problem installing drivers for Brother DCP-1400

    - by ToddB
    I have a problem installing drivers for my DCP-1400 printer. I am using Ubuntu 11.10 and am new to Ubuntu and Linux in general. I downloaded the lpr and cupswrapper files from the Brother Linux site into my Downloads directory. I next run Terminal and switch to the Downloads directory. At the prompt I type the following:

        sudo dpkg -i --force-all dcp1400lpr-1.1.2-1.i386.deb

    It asks for my password, which I type in, and I press Enter. I get the following errors:

        (Reading database ... 155195 files and directories currently installed.)
        Preparing to replace dcp1400lpr 1.1.2-1 (using dcp1400lpr-1.1.2-1.i386.deb) ...
        Unpacking replacement dcp1400lpr ...
        /var/lib/dpkg/info/dcp1400lpr.postrm: 3: /etc/init.d/lpd: not found
        dpkg: warning: subprocess old post-removal script returned error exit status 127
        dpkg - trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/postrm: 3: /etc/init.d/lpd: not found
        dpkg: error processing dcp1400lpr-1.1.2-1.i386.deb (--install):
         subprocess new post-removal script returned error exit status 127
        /var/lib/dpkg/tmp.ci/postrm: 3: /etc/init.d/lpd: not found
        dpkg: error while cleaning up:
         subprocess new post-removal script returned error exit status 127
        Errors were encountered while processing:
         dcp1400lpr-1.1.2-1.i386.deb

    I now notice that when I open the Synaptic package manager I get the message:

        E: The package dcp1400lpr needs to be reinstalled, but I can't find an archive for it.
        E: Internal error opening cache (1). Please report.

    Any help provided would be appreciated. Thank you. TB
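    Edit: from reading the output, the failing piece is the package's post-removal script calling /etc/init.d/lpd, which doesn't exist on my system. One workaround I've seen suggested (I haven't confirmed it's the right fix, so use with care) is to give dpkg a harmless stub so the maintainer script can complete:

        # create a stand-in for the missing lpd init script
        printf '#!/bin/sh\nexit 0\n' | sudo tee /etc/init.d/lpd > /dev/null
        sudo chmod +x /etc/init.d/lpd

        # then retry the install
        sudo dpkg -i --force-all dcp1400lpr-1.1.2-1.i386.deb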

    Read the article

  • Is 'Protection' an acceptable Java class name

    - by jonny
    This comes from a closed thread at Stack Overflow, where there are already some useful answers, though a commenter suggested I post here. I hope this is OK! I'm trying my best to write good, readable code, but often have doubts about my work! I'm creating some code to check the status of some protected software, and have created a class with methods to check whether the software in use is licensed (there is a separate Licensing class). I've named the class 'Protection', which is currently accessed via the creation of an appProtect object. The methods in the class allow checking a number of things about the application, in order to confirm that it is in fact licensed for use. Is 'Protection' an acceptable name for such a class? I read somewhere that if you have to think too long about names for methods, classes, objects etc., then perhaps you may not be coding in an Object Oriented way. I've spent a lot of time thinking about this before making this post, which has led me to doubt the suitability of the name! In creating (and proof-reading) this post, I'm starting to seriously doubt my work so far. I'm also thinking I should probably rename the object to applicationProtection rather than appProtect (though I am open to any comments on this too?). I'm posting nonetheless, in the hope that I'll learn something from others' views/opinions, even if they're simply confirming I've "done it wrong"!

    Read the article

  • Team Foundation Service Preview now open for all!

    - by Tarun Arora
    The concept of TFS in the cloud was first presented back in early 2010. The product team worked hard to preview a constantly evolving solution at the BUILD conference last year, and after having completed 31 sprints, today the preview service has been opened to all. No more invitation codes required - TfsPreview has been made public!

    "Since we announced the Team Foundation Service Preview at the BUILD conference last year, we've limited the on boarding of new customers by requiring invitation codes to create accounts. The main reason for this has been to control the growth of the service to make sure it didn't run away from us and end up with a bad user experience. In this time period, we've continued to work on our infrastructure, performance, scale, monitoring, management and, of course, some cool new features like cloud build." - Brian Harry

    Since the service is still in preview, it is free for all; if you haven't yet, now is the best time to try out the offering. There is no fixed timeline for when the service becomes chargeable, but the terms of service support production use, the service is reliable, and the product team is committed to carrying all of your data forward into production.

    "The service will remain in 'preview' for a while longer while we work through additional features like data portability, commercial terms, etc but the terms of service support production use, the service is reliable and we expect to carry all of your data forward into production." - Brian Harry

    As of today it's possible to use TFS Preview with VS 2012 RC, VS 2010 SP1 and VS 2008 SP1; the service currently does not work with VS 2005, which is something the product team is actively working on. You can refer to Brian's announcement blog post here: http://blogs.msdn.com/b/bharry/archive/2012/06/11/team-foundation-service-preview-is-public.aspx

    Read the article

  • Firefox freezes frequently

    - by user141740
    Good day. The application Firefox freezes very frequently, and I have to use "force quit" to get out, so I lose all my activities; it is extremely frustrating. Only on one occasion was there a pop-up message saying that this problem was going to be tracked; on all other occasions there is no tracking and no message. I posted this error on the Ubuntu community forum, the thread was stopped, and I was told to post it on Launchpad. I tried to do so with no success: after reading pages and pages, which I really do not understand (who would read them, and why so many tedious rules and so much information?), I could not even find the place or the way to post this bug. So I am posting here on AskUbuntu in the hope of some useful help, and I have to mention I am new to Linux. Just a few minutes ago, I opened Firefox through the Terminal and it crashed very quickly; here are the error messages, in the hope that they can help. Thank you in advance; I look forward to your help in solving this frustrating problem/bug. If you wish, you may post it on Launchpad or do with the report as you wish, as long as the problem is solved. Here are the messages appearing in the Terminal after Firefox crashed:

        ** (firefox:4099): WARNING **: Error calling add_icon method of Contextcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling set_homepage method of Contextcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling clear_indicator method of Indicatorcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling clear_indicator method of Indicatorcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling clear_indicator method of Indicatorcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling set_view_location method of Contextcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling set_view_window method of Contextcontext: Timeout was reached
        ** (firefox:4099): WARNING **: Error calling set_view_is_active method of Contextcontext: Timeout was reached
        Killed

    Read the article
