Search Results

Search found 10538 results on 422 pages for 'push technology'.

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single threaded. My problem is how a thread working on the (now) thread-safe Stack can know when it is empty. Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then Pop is not atomic. Possible solutions as I see them (which is preferred, and am I missing any that would work better?):

    1. Consumer threads catch the InvalidOperationException thrown by Pop().
    2. Pop() returns a nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator like C#.
    3. Pop() returns a boolean and uses an output parameter to return the popped element.

    Here is the code I am using right now:

        generic <typename T>
        public ref class ThreadSafeStack {
        public:
          ThreadSafeStack() {
            _stack = gcnew Collections::Generic::Stack<T>();
          }

          void Push(T element) {
            _stack->Push(element);
          }

          T Pop(void) {
            System::Threading::Monitor::Enter(_stack);
            try {
              return _stack->Pop();
            }
            finally {
              System::Threading::Monitor::Exit(_stack);
            }
          }

          property int Count {
            int get(void) {
              System::Threading::Monitor::Enter(_stack);
              try {
                return _stack->Count;
              }
              finally {
                System::Threading::Monitor::Exit(_stack);
              }
            }
          }

        private:
          Collections::Generic::Stack<T> ^_stack;
        };
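
    A minimal sketch of the third option, written in Python purely for illustration (the names ThreadSafeStack and try_pop, and the use of threading.Lock, are mine, not from the post); the point that holds in any language is that the emptiness check and the pop must happen under a single lock acquisition:

        import threading

        class ThreadSafeStack:
            def __init__(self):
                self._stack = []                # underlying storage
                self._lock = threading.Lock()   # guards every access to _stack

            def push(self, element):
                with self._lock:
                    self._stack.append(element)

            def try_pop(self):
                # Check-and-pop under one lock acquisition, so no other
                # consumer can empty the stack between the two steps.
                with self._lock:
                    if not self._stack:
                        return False, None
                    return True, self._stack.pop()

    A consumer then loops on ok, item = stack.try_pop() and backs off (or waits on a condition variable) when ok is False, instead of consulting Count separately.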

  • Help me understand Inorder Traversal without using recursion

    - by vito
    I am able to understand preorder traversal without using recursion, but I'm having a hard time with inorder traversal. I just don't seem to get it, perhaps because I haven't understood the inner working of recursion. This is what I've tried so far:

        def traverseInorder(node):
            lifo = Lifo()
            lifo.push(node)
            while True:
                if node is None:
                    break
                if node.left is not None:
                    lifo.push(node.left)
                    node = node.left
                    continue
                prev = node
                while True:
                    if node is None:
                        break
                    print node.value
                    prev = node
                    node = lifo.pop()
                node = prev
                if node.right is not None:
                    lifo.push(node.right)
                    node = node.right
                else:
                    break

    The inner while-loop just doesn't feel right. Also, some of the elements are getting printed twice; maybe I can solve this by checking if that node has been printed before, but that requires another variable, which, again, doesn't feel right. Where am I going wrong?

    I haven't tried postorder traversal, but I guess it's similar and I will face the same conceptual blockage there, too. Thanks for your time!

    P.S.: Definitions of Lifo and Node:

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value = value
                self.left = left
                self.right = right

        class Lifo:
            def __init__(self):
                self.lifo = ()
            def push(self, data):
                self.lifo = (data, self.lifo)
            def pop(self):
                if len(self.lifo) == 0:
                    return None
                ret, self.lifo = self.lifo
                return ret
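
    For contrast, a minimal sketch of the standard iterative inorder pattern (assuming the same Node class; a plain list stands in for Lifo): the loop alternates between sliding down the left spine, pushing nodes as it goes, and popping a node, visiting it, and stepping into its right subtree. Each node is pushed and popped exactly once, which is what eliminates the double printing:

        def traverse_inorder(root):
            stack = []   # holds nodes whose left subtree is still being explored
            node = root
            while stack or node is not None:
                if node is not None:
                    stack.append(node)   # defer this node until its left side is done
                    node = node.left
                else:
                    node = stack.pop()   # left subtree finished: visit, then go right
                    print(node.value)
                    node = node.right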

  • Intel Assembly Programming

    - by Kay
        class MyString {
          char buf[100];
          int len;
          boolean append(MyString str) {
            int k;
            if (this.len + str.len > 100) {
              for (k = 0; k < str.len; k++) {
                this.buf[this.len] = str.buf[k];
                this.len++;
              }
              return false;
            }
            return true;
          }
        }

    Does the above translate to:

        start:
          push ebp              ; save calling ebp
          mov ebp, esp          ; setup new ebp
          push esi
          push ebx
          mov esi, [ebp + 8]    ; esi = 'this'
          mov ebx, [ebp + 14]   ; ebx = str
          mov ecx, 0            ; k = 0
          mov edx, [esi + 200]  ; edx = this.len
        append:
          cmp edx + [ebx + 200], 100
          jle ret_true          ; if (this.len + str.len) <= 100 then ret_true
          cmp ecx, edx
          jge ret_false         ; if k >= str.len then ret_false
          mov [esi + edx], [ebx + 2*ecx]  ; this.buf[this.len] = str.buf[k]
          inc edx               ; this.len++
        aux:
          inc ecx               ; k++
          jmp append
        ret_true:
          pop ebx               ; restore ebx
          pop esi               ; restore esi
          pop ebp               ; restore ebp
          ret true
        ret_false:
          pop ebx               ; restore ebx
          pop esi               ; restore esi
          pop ebp               ; restore ebp
          ret false

    My greatest difficulty here is figuring out what to push onto the stack and the math for pointers. NOTE: I'm not allowed to use global variables and I must assume 32-bit ints, 16-bit chars and 8-bit booleans.

  • Is there some advantage to filling a stack with nils and interpreting the "top" as the last non-nil value?

    - by dwilbank
    While working on a rubymonk exercise, I am asked to implement a stack with a hard size limit. It should return nil if I try to push too many values, or if I try to pop an empty stack. My solution is below, followed by their solution. Mine passes every test I can give it in my IDE, while it fails rubymonk's test. But that isn't my question. The question is: why did they choose to fill the stack with nils instead of letting it shrink and grow like it does in my version? It just makes their code more complex.

    Here's my solution:

        class Stack
          def initialize(size)
            @max = size
            @store = Array.new
          end

          def pop
            empty? ? nil : @store.pop
          end

          def push(element)
            return nil if full?
            @store.push(element)
          end

          def size
            @store.size
          end

          def look
            @store.last
          end

          private

          def full?
            @store.size == @max
          end

          def empty?
            @store.size == 0
          end
        end

    and here is the accepted answer:

        class Stack
          def initialize(size)
            @size = size
            @store = Array.new(@size)
            @top = -1
          end

          def pop
            if empty?
              nil
            else
              popped = @store[@top]
              @store[@top] = nil
              @top = @top.pred
              popped
            end
          end

          def push(element)
            if full? or element.nil?
              nil
            else
              @top = @top.succ
              @store[@top] = element
              self
            end
          end

          def size
            @size
          end

          def look
            @store[@top]
          end

          private

          def full?
            @top == (@size - 1)
          end

          def empty?
            @top == -1
          end
        end
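
    For what it's worth, a minimal Python sketch of the same contract (the names are mine, not the exercise's) suggests the grow-and-shrink approach does satisfy it; pre-sizing the array with nils and tracking a top index buys a fixed allocation and constant-time slot reuse, a style/optimization choice rather than anything the contract requires:

        class BoundedStack:
            def __init__(self, limit):
                self.limit = limit
                self.store = []          # grows and shrinks; no sentinel slots

            def push(self, element):
                if len(self.store) >= self.limit:
                    return None          # refuse pushes past the size limit
                self.store.append(element)
                return self

            def pop(self):
                return self.store.pop() if self.store else None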

  • Intel IA-32 Assembly

    - by Kay
    I'm having a bit of difficulty converting the following Java code into Intel IA-32 assembly:

        class Person() {
          char name[8];
          int age;
          void printName() { ... }
          static void printAdults(Person[] list) {
            for (int k = 0; k < 100; k++) {
              if (list[k].age >= 18) {
                list[k].printName();
              }
            }
          }
        }

    My attempt is:

        Person:
          push ebp            ; save caller's ebp
          mov ebp, esp        ; setup new ebp
          push esi            ; esi will hold name
          push ebx            ; ebx will hold list
          push ecx            ; ecx will hold k
        init:
          mov esi, [ebp + 8]
          mov ebx, [ebp + 12]
          mov ecx, 0          ; k = 0
        forloop:
          cmp ecx, 100
          jge end             ; if k >= 100 then break forloop
          cmp [ebx + 4 * ecx], 18
          jl auxloop          ; if list[k].age < 18 then go to auxloop
          jmp printName
        printName:
        auxloop:
          inc ecx
          jmp forloop
        end:
          pop ecx
          pop ebx
          pop esi
          pop ebp

    Is my code correct? NOTE: I'm not allowed to use global variables.

  • How can I create a new Person object correctly in JavaScript?

    - by TimDog
    I'm still struggling with this concept. I have two different Person objects, very simply:

        ;Person1 = (function() {
          function P(fname, lname) {
            P.FirstName = fname;
            P.LastName = lname;
            return P;
          }
          P.FirstName = '';
          P.LastName = '';
          var prName = 'private';
          P.showPrivate = function() {
            alert(prName);
          };
          return P;
        })();

        ;Person2 = (function() {
          var prName = 'private';
          this.FirstName = '';
          this.LastName = '';
          this.showPrivate = function() {
            alert(prName);
          };
          return function(fname, lname) {
            this.FirstName = fname;
            this.LastName = lname;
          }
        })();

    And let's say I invoke them like this:

        var s = new Array();

        //Person1
        s.push(new Person1("sal", "smith"));
        s.push(new Person1("bill", "wonk"));
        alert(s[0].FirstName);
        alert(s[1].FirstName);
        s[1].showPrivate();

        //Person2
        s.push(new Person2("sal", "smith"));
        s.push(new Person2("bill", "wonk"));
        alert(s[2].FirstName);
        alert(s[3].FirstName);
        s[3].showPrivate();

    The Person1 set alerts "bill" twice, then alerts "private" once -- so it recognizes the showPrivate function, but the local FirstName variable gets overwritten. The Person2 set alerts "sal", then "bill", but it fails when the showPrivate function is called. The new keyword here works as I'd expect, but showPrivate (which I thought was a publicly exposed function within the closure) is apparently not public. I want my object to have distinct copies of all local variables and also expose public methods -- I've been studying closures quite a bit, but I'm still confused on this one. Thanks for your help.
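
    The Person1 behavior is essentially shared state on the constructor itself (and, because the constructor returns P, every new Person1(...) call hands back the very same object). A loose Python analogy, mine rather than the poster's, may make the distinction concrete: attributes written to the class are one shared slot, while attributes written to self in __init__ are per-instance:

        class SharedName:
            first_name = ''   # class attribute: one slot shared by all instances

            def __init__(self, fname):
                # Overwrites the shared slot on every construction -- like Person1.
                SharedName.first_name = fname

        class OwnName:
            def __init__(self, fname):
                # Per-instance slot -- like Person2's FirstName/LastName.
                self.first_name = fname

        a, b = SharedName('sal'), SharedName('bill')
        print(a.first_name, b.first_name)   # bill bill
        c, d = OwnName('sal'), OwnName('bill')
        print(c.first_name, d.first_name)   # sal bill

    The JavaScript fix runs along the same lines: assign to this inside the constructor and put shared methods on the prototype, rather than assigning to the constructor function and returning it.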

  • Form validator from the iframe

    - by basit74
    Hello, I have the following form-validator function in my document:

        function formValidator(formid) {
          var form = cic.$(formid);
          if (!form) return ('');
          var errors = [];
          var len = form.elements.length;
          for (var elementIdx = 0; elementIdx < len; elementIdx++) {
            var element = form.elements[elementIdx];
            if (!element && !element.getAttribute('validationtype')) return ('');
            switch (element.getAttribute('validationtype')) {
              case 'text':
                if (cic.getValue(element).strip() == "")
                  errors.push(element.getAttribute('validationmsg'));
                break;
              case 'email':
                if (!cic.isEmail(cic.getValue(element)))
                  errors.push(element.getAttribute('validationmsg'));
                break;
              case 'numeric':
                if (isNaN(cic.getValue(element).replace(',', '.')))
                  errors.push(element.getAttribute('validationmsg'));
                break;
              case 'confirm':
                if (cic.getValue(cic.$(element.getAttribute('sourcefield'))) !== cic.getValue(element))
                  errors.push(element.getAttribute('validationmsg'));
                break;
            }
          }
          return (errors.length > 0) ? '<li>' + errors.uniq().join("<li>") : '';
        }

    It works fine. Now I have an iframe in my document, and that iframe contains the form to validate. What would be the best practice to change this function so that it can validate the document's forms and the iframe's forms simultaneously? Thanks
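
    Not the poster's code, but a minimal Python sketch (every name here is hypothetical) of the usual restructuring: separate the per-field checks from the choice of which documents to scan, then pass in every form collection you care about; in the browser that would be document.forms plus each iframe's contentDocument.forms:

        def is_nonempty(value):
            return value.strip() != ''

        def is_numeric(value):
            # Accept a comma as the decimal separator, as the original does.
            try:
                float(value.replace(',', '.'))
                return True
            except ValueError:
                return False

        # Mirrors the switch on the validationtype attribute.
        VALIDATORS = {'text': is_nonempty, 'numeric': is_numeric}

        def collect_errors(form_collections):
            # form_collections stands in for [document.forms, iframe.forms, ...];
            # each form is modeled as a list of field dicts.
            errors = []
            for forms in form_collections:
                for form in forms:
                    for field in form:
                        check = VALIDATORS.get(field.get('validationtype'))
                        if check and not check(field.get('value', '')):
                            errors.append(field['validationmsg'])
            return errors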

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project):

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote: To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8  master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)

        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku
        To [email protected]:your_app_name.git
           3bf6e8d..642f01a  master -> master
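
    For what it's worth, the trick widely attributed to Heroku (an assumption on my part, not something stated in this post) is not suppressing the prefix at all: the git client always prepends 'remote: ' to sideband output. Instead, the hook emits an ANSI "move to column 1" escape (ESC[1G) at the start of each line, so the terminal rewinds and the line overdraws the prefix. A minimal post-receive sketch in Python (hooks can be any executable; everything here is illustrative):

        #!/usr/bin/env python
        import sys

        COLUMN1 = '\x1b[1G'   # ANSI CSI: move the cursor back to column 1

        def say(line):
            # The client prints "remote: <escape><line>"; the escape moves the
            # cursor to column 1, so the line visually overwrites the prefix.
            sys.stdout.write(COLUMN1 + line + '\n')
            sys.stdout.flush()

        say('-----> Receiving push')
        say('-----> Deploying updated files')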

  • Git from localhost to remotehost with a team of three

    - by Mark McDonnell
    Hi, I'm completely new to Git. I've only just worked out how to use Github in a basic way (e.g. push my local file changes to Github - so I've not done 'pulling' down of content from Github and 'merging' it into my localhost version or anything like that). I had a look over at this existing question - Git: localhost remote development remote production - but I think it may have been a bit advanced for me at this stage as I didn't quite understand the terminology that most of the people were using. What I would like to achieve is to have a local server set-up that my team of developers can all 'push' to/'pull' from etc. And then have that local server upload any updated files automatically to our web server so we could see the updates live in the browser. I'm happy to get a server set-up in the office running Mac OSX Server and then installing Git on it and then getting the devs to write a shell script to push to the remote server but only if it was fairly easy for the devs local git to push to this new local server. I'm not a network engineer so I don't know what would need to be set-up for that to work, I know obviously we could set-up the server to be accessible via a local ip address like 192.168.0.xxx but not sure how that works with pushing to a git repository on that server? Would that literally be something like doing this on my local machine: git remote add MyGitFile git://192.168.0.xxx/MyGitFile.git ? Any ideas or advice you can give to a total Git newbie trying to help his team get a better work flow. Kind regards, Mark

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing.

    Now, I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command.

    Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is, since it's a private repository, git doesn't allow checking out only a specific folder (one that I could push to github as a separate project, but whose changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so they're useless for backup).

    Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then do git push backupdrive1, git push mymemorystick, etc.

    So, the question: how do you organise your personal code and projects in git repositories, and keep them synced and backed up?

  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, I have a host that allows mercurial repositories, and I am also trying to push to bitbucket). The repository has a lot of files and is a little more than 200mb. Locally, I am able to clone the repository without problems.

    Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN. I have tried everything. So instead, I created a new repository at the remote servers (both at the host and bitbucket) with nothing in it. Now I am pushing the complete repository that I have locally to these remote locations. So far it has been unsuccessful, as the push operation is stuck on "searching for changes" and does not give me any other useful output. I have let it go for about an hour with no change.

    Now my question is: what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDns - I have configured it properly and the router forwards the ports correctly), so that I can get the server to clone the repository the first time, after which I will be pushing to it.

    My second question is: assuming the clone at the server does not work (for example, if I were to push my current repository to bitbucket), is creating an empty repository at the server and then pushing a local repository to the new remote repository OK? Is that the source of the "searching for changes" problem? Any help in this regard would be greatly appreciated.

  • Building Active Record Conditions in an array - private method 'scan' called error

    - by Nick
    Hi, I'm attempting to build a set of conditions dynamically using an array, as suggested in the first answer here: http://stackoverflow.com/questions/1658990/one-or-more-params-in-model-find-conditions-with-ruby-on-rails. However, I seem to be doing something incorrectly, and I'm not sure if what I'm trying is fundamentally unsound or if I'm simply botching my syntax. I'm simplifying down to a single condition here to try to illustrate the issue, as I've tried to build a simple proof of concept before layering on the 5 different condition styles I'm contending with.

    This works:

        excluded.push 12
        excluded.push 30
        @allsites = Site.all(:conditions => ["id not in (?)", excluded])

    This results in a private method 'scan' called error:

        excluded.push 12
        excluded.push 30
        conditionsSet << ["id not in (?)", excluded]
        @allsites = Site.all(:conditions => conditionsSet)

    Thanks for any advice. I wasn't sure if the proper thing was to post this as a followup to the related question/answers I noted at the top, since I've got a problem, not an answer. If there is a better way to post this related to the existing post, please let me know.
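
    Not Rails, but a minimal Python sketch of the pattern the linked answer describes (names are hypothetical): accumulate the clause fragments and their values separately, then flatten them into a single [clause, value, value, ...] structure. Appending a complete ["clause", values] array inside another array, as in the failing snippet, is what the finder chokes on:

        def build_conditions(filters):
            # filters: list of (sql_fragment, value) pairs,
            # e.g. ("id not in (?)", [12, 30])
            clauses, values = [], []
            for fragment, value in filters:
                clauses.append(fragment)
                values.append(value)
            # One flat list: the combined clause string first, then its values.
            return [" and ".join(clauses)] + values

        conditions = build_conditions([
            ("id not in (?)", [12, 30]),
            ("status = ?", "active"),
        ])
        # => ["id not in (?) and status = ?", [12, 30], "active"]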

  • How do you hide/show a particular context menu item in Flex?

    - by R.Vijayakumar
        var contextMenu:ContextMenu = new ContextMenu();
        contextMenu.hideBuiltInItems();

        var contactList:ContextMenuItem = new ContextMenuItem("Add to Existing List");
        contactList.addEventListener(ContextMenuEvent.MENU_ITEM_SELECT, doStaticListCommand);

        var newContactList:ContextMenuItem = new ContextMenuItem("Add a New List");
        newContactList.addEventListener(ContextMenuEvent.MENU_ITEM_SELECT, doNewStaticListCommand);

        var removeContactList:ContextMenuItem = new ContextMenuItem("Remove contact from List");
        removeContactList.addEventListener(ContextMenuEvent.MENU_ITEM_SELECT, doRemoveListCommand);

        var deletecontact:ContextMenuItem = new ContextMenuItem("Delete contact");
        deletecontact.addEventListener(ContextMenuEvent.MENU_ITEM_SELECT, dodeleteconactCommand);

        var TimeList:ContextMenuItem = new ContextMenuItem("Add Time Spent");
        TimeList.addEventListener(ContextMenuEvent.MENU_ITEM_SELECT, doTimeListCommand);

        contextMenu.customItems.push(contactList);
        contextMenu.customItems.push(newContactList);
        contextMenu.customItems.push(deletecontact);
        contextMenu.customItems.push(removeContactList);

    In Flex I've built this context menu. When I right-click, the context menu items show, but I want to hide a particular item in the list. Is it possible to hide and show particular items in a context menu? Please advise.

  • Google Analytics event not firing

    - by yar1
    I am not getting any event data in GA. I installed the Google Analytics Debugger Chrome extension and I see nothing happening (same goes when looking at the Network panel in developer tools). I Googled it and read many (many) other answers, and it looks like I'm doing things right. Page views, etc. are registering correctly... I have this code as the last thing before my closing tag:

        <script>
          (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
          (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
          m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
          })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

          ga('create', 'UA-MYREALCODE', 'mybna.net');
          ga('send', 'pageview');

          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-MYREALCODE']);
          _gaq.push(['_trackPageview']);
        </script>

    My event handlers are done using jQuery, all inside an external js file, loaded before the closing tag. For example:

        $(function () {
          $('#show-less').click(function (e) {
            pbr.showHideMore(e);
            _gaq.push(['_trackEvent', 'ShowMore', 'Hide', 'top button']);
          });
        });

    Any ideas, anyone?

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure. These updates included:

    - Web Sites: General Availability release of Windows Azure Web Sites with SLA
    - Mobile Services: General Availability release of Windows Azure Mobile Services with SLA
    - Auto-Scale: new automatic scaling support for Web Sites, Cloud Services and Virtual Machines
    - Alerts/Notifications: new email alerting support for all compute services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
    - MSDN: no more credit card requirement for sign-up

    All of these improvements are now available to use immediately (note: some are still in preview). Below are more details about them.

    Web Sites: General Availability Release of Windows Azure Web Sites

    I'm incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps. Today's General Availability release means we are taking off the "preview" tag from the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites. This means we are providing:

    - A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
    - Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

    The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost.

    The Standard tier, which was called "Reserved" during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them. You can easily scale your Standard instances on demand using the Windows Azure Management Portal. You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 cores, 3.5GB of RAM), or a Large instance (4 cores and 7GB of RAM). You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM.

    Today's release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address. SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of an SSL cert - but the fee is per-cert and not per-site, which means you pay once for it regardless of how many sites you use it with). Today's release also includes the following new features:

    Auto-Scale support: Today's Windows Azure release adds preview support for auto-scaling web sites. This enables you to set up automatic scale rules based on the activity of your instances - allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases. See below for more details.

    64-bit and 32-bit mode support: You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode). This enables you to address even more memory within individual web applications.

    Memory dumps: Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools.

    Scaling Sites Independently: Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary. Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier.

    Full pricing details on Windows Azure Web Sites can be found here. Note that the "Shared tier" of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

    Mobile Services: General Availability Release of Windows Azure Mobile Services

    I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services. Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

    Customers: We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it. These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require. In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

    Partners: When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services. The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

    - New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
    - SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
    - Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
    - Xamarin provides a Mobile Services add-on to make it easy to build cross-platform connected mobile apps.
    - Pusher allows you to quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

    Visual Studio 2013 and Windows 8.1: This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever. Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically.

    Add Push Notifications: Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

    Server Explorer Integration: In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio.

    Pricing: With today's general availability release we are announcing that we will be offering Mobile Services in three tiers - Free, Standard, and Premium. Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs. You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support - allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

    Build Conference Talks: The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

    - Mobile Services - Soup to Nuts — Josh Twist
    - Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
    - Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
    - Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
    - Who's that user? Identity in Mobile Apps — Dinesh Kulkarni
    - Building REST Services with JavaScript — Nathan Totten
    - Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
    - Protips for Windows Azure Mobile Services — Chris Risner

    AutoScale: Dynamically scale up/down your app based on real-world usage

    One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we're announcing that AutoScale will be built into Windows Azure directly. With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).

    Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics:

    - CPU percentage
    - Storage queue depth (Cloud Services and Virtual Machines only)

    We'll enable automatic scaling on even more scale metrics in future updates.

    When to use Auto-Scale: The following are good criteria for services/apps that will benefit from the use of auto-scale:

    - The service/app can scale horizontally (e.g. it can be duplicated to multiple instances)
    - The service/app load changes over time

    If your app meets these criteria, then you should look to leverage auto-scale.

    How to Enable Auto-Scale: To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable. Within the Scale tab, turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale. Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. As an example, I configured a web site so that it will run using between 1 and 5 VM instances. The exact # used will depend on the aggregate CPU of the VMs, using the 40-70% range I configured. If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I've configured it to use). If the aggregate CPU drops below 40%, then Windows Azure will automatically start shutting down VMs to save me money. Once you've turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

    Using the Auto-Scale Preview: With today's update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During the preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

    Alerts and Notifications

    Starting today we are now providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, VMs, Web Sites and Mobile Services). Alerts provide you the ability to get proactively notified of active or impending issues within your application. You can define alert rules for:

    - Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured.
    - Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured.
    - For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured.

    Creating Alert Rules: You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on. The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected. Once created, the rule will show up in your alerts list within the Settings tab. A rule is shown as "not activated" while it hasn't tripped over the threshold you set. If the metric goes over the limit, though, I'll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the Alerts tab I'll see it highlighted in red. Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past.

    Alert Notifications: With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified on active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

    No More Credit Card Requirement for MSDN Subscribers

    Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure sign-up for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users. Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today. If you haven't signed up yet, I definitely recommend checking it out.

    Summary

    Today's release includes a ton of great features that enable you to build even better cloud solutions. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • Stop Spinning Your Wheels… Sage Advice for Aspiring Developers

    - by Mark Rackley
    So… lately I've been tasked with helping bring some non-developers over the hump to become full-fledged, all-around SharePoint developers. Well, only time will tell if I'm successful or a complete failure. Good thing about failures though, you know what NOT to do next time! Anyway, I've been writing some sort of code since I was about 10 years old, so I sometimes take for granted the effort some people have to go through to learn a new technology. I guess if I had to say I was an "expert" in one thing it would be learning (and getting "stuff" done) in new technologies. Maybe that's why I've embraced SharePoint and the SharePoint community. SharePoint is the first technology I haven't been able to master or get everything done without help from other people. I KNOW I'll never know it all and I learn something new every day. It keeps it interesting, it keeps me motivated, and keeps me involved. So, what some people may consider a downside of SharePoint, I definitely consider a plus. Crap.. I'm rambling. Where was I? Oh yeah… me trying to be helpful.

    Like I said, I am able to quickly and effectively pick up new languages and technology and put them to good use. Am I just brilliant? Well, my mom thinks so.. but maybe not. Maybe I've just been doing it for a long time…. 25 years in some form or fashion… wow I'm old… Anyway, what I lack in depth I make up for in breadth, and I'm the "go-to" guy wherever I work when someone needs to "get stuff done". Let's see if I can take some of that experience and put it to practical use to help new people get up to speed faster, learn things more effectively, and become that go-to guy. First off… make sure you…

    Know The Basics

    I don't have the time to teach new developers the basics, but you gotta know them. I've only been "taught" two languages: Fortran 77 and C… everything else I've picked up from "doing". I HAD to know the basics though, and all new developers need to understand the very basics of development. 97.23% of all languages will have the following:

    - Variables
    - Functions
    - Arrays
    - If statements
    - For loops / While loops

    If you think about it, most development is "if this, do this… or while this, do this…". "This" may be some method unique to your language or something you develop, but the basics are the basics. YES, there are MANY other development topics you need to understand, but you shouldn't be scratching your head trying to figure out what a "for loop" is… (Also learn about classes and hashtables as quickly as possible.) Once you have the basics down it makes it much easier to…

    Learn By Doing

    This may just apply to me and my warped brain. I don't learn a new technology by reading or hearing someone speak about it. I learn by doing. It does me no good to try and learn all of the intricacies of a new language or technology inside-and-out before getting my hands dirty. Just show me how to do one thing… let me get that working… then show me how to do the next thing.. let me get that working… Now, let's see what I can figure out on my own. Okay.. now it starts to make sense. I see how the language works, I can step through the code, and before you know it.. I'm productive in a new technology. Be careful here though…. make sure you…

    Don't Reinvent The Wheel

    People have been writing code for what… 50+ years now? So, why are you trying to tackle ANYTHING without first Googling it with Bing to see what others have done first? When I was first learning C# (I had come from a Java background) I had to call a web service. Sure! No problem! I'd done this many times in Java. So, I proceeded to write an HTTP handler, called the web service, and it worked like a charm!!! Probably about 2.3 seconds after I got it working completely, someone said to me, "Why didn't you just add a Web Reference?" Really? You can do that? oops… I just wasted a lot of time. Before undertaking the development of any sort of utility method in a new language, make sure it's not already handled for you… Okay… you are starting to write some code and are curious about the possibilities? Well… don't just sit there…

    Try It And See What Happens

    This is actually one of my biggest pet peeves. "So… 'x++' works in C#, but does it also work in JavaScript?" Really? Did you just ask me that? In the time it took for you to type that email, press the send button, me to receive the email, get around to reading it, and reply with "yes", you could have tested it 47 times and known the answer! Just TRY it! See what happens! You aren't doing brain surgery. You aren't going to kill anyone, and you BETTER not be developing in production. So, you are not going to crash any production systems!! Seriously! Get off your butt and just try it yourself. The extra added benefit is that if it doesn't work, the absolute best way to learn is to…

    Learn From Your Failures

    I don't know about you… but if I screw up and something doesn't work, I learn A LOT more debugging my problem than if everything magically worked. It's okay that you aren't perfect! Not everyone can be me? In the same vein… don't ask someone else to debug your problem until you have made a valiant attempt to do so yourself. There's nothing quite like stepping through code line by line to see what it's REALLY doing… and you'll never feel more stupid sometimes than when you realize WHY it's not working.. but you realize... you learn... and you remember. There is nothing wrong with failure as long as you learn from it. As you start writing more and more and more code, make sure that you ALWAYS…

    Develop for Production

    You will soon learn that the "prototype" you wrote last week to show as a "proof of concept" is going to go directly into production no matter how much you beg and plead and try to explain it's not ready to go into production… it's going to go straight there.. and it's like herpes.. it doesn't go away and there's no fixing it once it's in there. So, why not write ALL your code like it will be put in production? It MIGHT take a little longer, but in the long run it will be easier to maintain, get help on, and you won't be embarrassed that it's sitting on a production server for everyone to use and see. So, now that you are getting comfortable and writing code for production, it is important to remember the…

    KISS Principle… Learn It… Love It… Keep It Simple Stupid

    Seriously.. don't try to show how smart you are by writing the most complicated code in history. Break your problem up into discrete steps and write each step. If it turns out you have some redundancy, you can always go back and tweak your code later. How bad is it when you write code that LOOKS cocky? I've seen it before… some of the most abstract and complicated classes when a class wasn't even needed! Or the most elaborate unreadable code jammed into one really long line when it could have been written in three lines, performed just as well, and been SOOO much easier to maintain. Keep it clear and simple.. baby steps, people. This will help you learn the technology, debug problems, AND it will help others help you find your problems if they don't have to decipher the Dead Sea Scrolls just to figure out what you are trying to do…. Really.. don't be that guy… try to curb your ego and…

    Keep an Open Mind

    No matter how smart you are… how fast you type… or how much you get paid, don't let your ego get in the way. There is probably a better way to do everything you've ever done. Don't become so cocky that you can't think someone knows more than you. There's a lot of brilliant, helpful people out there willing to show you tricks if you just give them a chance. A very super-awesome developer once told me, "So what if you've been writing code for 10 years or more! Does your code look basically the same? Are you not growing as a developer?" Those 10 years become pretty meaningless if you just "know" that you are right and have not picked up new tips, tricks, methods, and patterns along the way. Learn from others and find out what's new in development land (you know you don't have to specifically use pointers anymore??). Along those same lines… if it's not working, first assume you are doing something wrong. You have no idea how much it annoys people who are trying to help you when you first assume that the help they are trying to give you is wrong. Just MAYBE… you… the person learning… are making some small mistake? Maybe you didn't describe your problem correctly? Maybe you are using the wrong terminology? "I did exactly what you said and it didn't work." Oh really? Are you SURE about that? "Your solution doesn't work." Well… I'm pretty sure it works, I've used it 200 times… What are you doing differently? First try some humility and appreciation.. it will go much further, especially when it turns out YOU are the one that is wrong. When all else fails….

    Try Professional Training

    Some people just don't have the mindset to go and figure stuff out. It's a gift and not everyone has it. If everyone could do it, I wouldn't have a job and there wouldn't be professional training available. So, if you've tried everything else and no light bulbs are coming on, contact the experts who specialize in training. Be careful though, there is bad training out there. Want to know the names of some good places? Just shoot me a message and I'll let you know. I'm boycotting endorsing Andrew Connell anymore until I get that free course, dangit!!

    So… that's it.. that's all I got right now. Maybe you thought all of this was common sense; maybe you think I'm smoking crack. If so, don't just sit there; there's a comments section for a reason. Finally, what about you? What tips do you have to help those aspiring to learn the dark arts??

  • e-interview: SunSpace to WebCenter migration

    - by me
    I had the pleasure to do an e-interview with Ana Neves around the SunSpace to WebCenter migration project.  Below is the english version of the interview.  Enjoy   Peter, you joined Oracle in 2009 through the acquisition of Sun. Becoming a part of Oracle meant many changes. The internal collaboration platform was one of them, as per a post you wrote back in 2011. Sun had SunSpace. How would you describe SunSpace? SunSpace was the internal Community and Social Collaboration platform for the Sun's Global Sales and Services Organization. SunSpace served around 600 communities with a main focus around technology, products and services. SunSpace was a big success. Within 3 months of its launch SunSpace had over 20,000 users and it won the Atlassian "Not just another wiki" Award for the best use of Confluence (https://blogs.oracle.com/peterreiser/entry/goodbye_sunspace_hello_webcenter). What made SunSpace so special? 1. People centric versus  Web centric The main concept of SunSpace put the person in the middle of everything. All relevant information, resources  etc. where dynamically pushed to a person's  myProfile ( Facebook like interface) based on the person's interest and  needs.  2. Ease to use  SunSpace was really easy to use. We spent a lot of time on social interaction design to optimize the user experience.  Also we integrated some sophisticated technology to hide complexity from the user. As example - when a user added a document to SunSpace - we analyzed the content of the document and suggested related metadata and tags to the user based on a sophisticated algorithm which was integrated with the corporate taxonomy. Based on this metadata the document was automatically shared with the relevant communities.  3. Easy to find One of the main use cases for SunSpace was that  a user could quickly find the content and information they needed for their job.  The search implementation was based on:  optimized search engine algorithm using social value based ranking enhancements community facilitated search optimization  faceted search which recommended highly relevant  content like products, communities and experts 4. Social Adoption  - How to build vibrant communities You can deploy the coolest social technology but what if the users are not using it?   To drive user adoption we implemented two  complementary models: 4.1 Community Methodology  We developed a set of best practices on how to create, run and sustain communities including: community structure and types (e.g. Community of Practice, Community of Interest etc.) & tips and tricks on how to build a "vibrant " communities, Community Health check etc.  These best practices where constantly tuned and updated by the community of community drivers. 4.2. Social Value System To drive user adoption there is ONE key  question you  have to answer for each individual user: What's In It For Me (WIIFM) We developed a Social Value System called Community Equity which measures the social value flow between People, Content and Metadata. Based on this technology we added "Gamfication" techniques (although at that time this term did not exist ) to SunSpace to honor people for the active contribution and participation.  As example: All  social credentials a user earned trough active community participation where dynamically displayed on her/his myProfile. How would you describe WebCenter? Oracle WebCenter (@oraclewebcenter) is the Oracle's  user engagement platform for social business. 
It helps people work together more efficiently through contextual collaboration tools that optimize connections between people, information, and applications and ensures users have access to the right information in the context of the business process in which they are engaged. Oracle WebCenter can help your organization deliver contextual and targeted Web experiences to users and enable employees to access information and applications through intuitive portals, composite applications, and mash-ups. How does it compare to SunSpace in terms of functionality? Before I answer this question, I would like to point out some limitation we started to see with the current SunSpace implementation. Due to the massive growth of the user population (>20,000 users), we experienced  performance and scalability challenges with the current technology. Also at the time - Sun Internal Communications and SunIT planned to replace the entire Sun Intranet with SunSpace. We  kicked-off a project to evaluate the enterprise level technology which eventually would replace the good old static Intranet.  And then Oracle acquired Sun. We already had defined the functional requirements for the Intranet replacement with a Social Enterprise Stack and we just needed to evaluate the functional requirements against WebCenter   Below are the summary of this evaluation  MyProfile SunSpace WebCenter How WebCenter Works Home MyProfile: to access, click on your name at the top of any WebCenter page Your name, title, and reporting line are displayed.  Sub-tabs show your activity stream (Activities); people in your network (Connections); files you have uploaded (Documents); your contact information (Organization); and any personal information you wish to share (About).   Files MyFiles Allows you to upload, download and store documents or wiki pages within folders and subfolders.  The WebDav interface allows you to download / upload files / folders with a simple drag and drop to / from your local machine.  Tagging is supported and recommended. Network HomeMyConnections Home: displays the activity stream of individuals in your network.MyConnections: shows individuals in your network.  Click on a person's name to see their contact info and link to their profile. Status Updates MyProfle > Activties Add and displays  your recent activties and status updates. Watches Preferences > Subscriptions > Current Subscriptions Receive email notifications when  pages / spaces you watch are modified. Drafts N/A WebCenter does not support Drafts Settings Preferences: to access, click on 'Preferences' at the top of any WebCenter page Set your general preferences, as well as your WebCenter messaging, search and mail settings. MyCommunities MySpaces: to access, click on 'Spaces' at the top of any WebCenter page Displays MySpaces (communities you are a member of); and Recent Spaces (communities you have recently visited). Community SunSpace Webcenter How Webcenter Works Home Home Displays a community introduction and activity stream.  Members can add messages, links or documents via the Community Message Board. No Top Contributors widget. People Members Lists members of the community. The Mail All Members feature allows moderators and participants to send a message to all members of the community. 
Membership Management can be found under > Manage > Members News News Members can post and access latest community news and they can subscribe to news using an RSS reader Documents Documents Allows community members to upload, download and store documents or wiki pages within folders and subfolders.  The WebDav interface allows participants to download / upload files / folders with a simple drag and drop to / from your local machine.  Tagging is supported and recommended. Wiki Wiki Allows community members to create and update web pages with a WYSIWYG editor.  Note: WebCenter does not support macros or portlet embedding. Forum Forum Post community forum topics. Contribute to community forum conversations.  N/A Calendar Update and/or view the Community Calendar. N/A Analytics Displays detailed analytics data (views,downloads, unique users etc.) for Pages, Wiki, Documents, and Forum in a given community space. What is the adoption of WebCenter at Oracle? The entire Intranet serving around 100,000 users  is running on WebCenter Content.  For professional communities we use WebCenter Portal and Spaces. Currently we have around 6,000 community spaces with  around 40,000 members.  Does Oracle have any metrics to assess usage and impact of WebCenter? Can you give us some examples? Sure -  we have a lot of metrics   For the Intranet we use traditional metrics like pageviews, monthly unique visitors and unique visits.  For Communities we use the WebCenter Portal/Spaces analytics service which gives as a wealth of data. The key metrics we track are: Space traffic (PageViews, Unique Users) Wiki,Documents (views, downloads etc.) Forum (users, views, posts etc.) Registered members over time  Depending on the community we can filter/segment the metrics by User Properties e.g. Country, Organization, Job Role etc. What are you doing to improve usage and impact? 1. We  integrating the WebCenter social services/fabric into all  main business applications. As example The Fusion CRM deployment is seamless integrated with Oracle Social Network (OSN) and all conversation around an opportunity or customer engagement is  done in OSN (see youtube video). 2. We drive Social Best Practice trough a program called "Social Networking & Business Collaboration (SNBC) program" You worked both with WebCenter and SunSpace. Knowing what you know today, if you had the chance to choose between the two, which one would you choose? Why? That's a tricky question   In the early days of  the Social Enterprise implementation (we started SunSpace in 2006), we needed an agile and easy to deploy technology to keep up with the users requirements. Sometimes we pushed two releases per day  and we were in a permanent perpetual beta mode - SunSpace was perfect for that.  After the social implementation matured over time - community generated content became business critical and we saw a change in the  requirements from agile to stability, scalability and reliability  of the infrastructure.  WebCenter is the right choice for such an enterprise-level deployment.  You are a WebCenter Evangelist at Oracle. What do you do as part of that role? Our  role is to help position Oracle as one of the key thought leaders and solutions provider for Social Business. In addition we drive social innovation trough our Oracle Appslab  team. Is that a full time role? Yes  How many other Evangelists are there in Oracle? 
We are currently five people in the WebCenter evangelist team (@webcentervoices):
- Christian Finn (@cfinn) leads the team. Christian came from the Microsoft SharePoint product management team and is a recognized expert in Social Business and Enterprise Collaboration.
- Noël Jaffré (@noeljaffre) is our Web Experience Management (WEM) guru and came to Oracle via the FatWire acquisition (now WebCenter Sites).
- Jake Kuramoto (@theapplab) is part of the Oracle AppsLab innovation team. Jake is well known as the driving force behind http://theappslab.com, a blog around social and innovation.
- Noel Portugal (@noelportugal) is a developer in the Oracle AppsLab innovation team; he is the inventor of OraTweet, Oracle's internal tweeting platform.
- Peter Reiser (@peterreiser) is a Social Business guru and the inventor of SunSpace and Community Equity.
What area of the business do you and the rest of the Evangelists sit in? What area of the organisation is responsible for WebCenter? We are part of the WebCenter product management organization. Is WebCenter part of the Knowledge Management strategy? Oracle WebCenter is Oracle's user engagement platform for social business. It brings together the most complete portfolio of portal, web experience management, content, social and collaboration technologies into a single product suite and is the product foundation of the Oracle Knowledge Management strategy. I am aware Oracle also uses Beehive internally. How would you describe Beehive? Oracle Beehive provides an integrated set of communication and collaboration services built on a single scalable, secure, enterprise-class platform. Beehive is used internally for enterprise-wide mail, calendar and real-time collaboration (Web conferencing) services. Are Beehive and WebCenter connected? Historically, Beehive and WebCenter Portal & Content had some overlap in functionality. (Hey - if a company has an acquisition strategy to strengthen its product offering and accelerate innovation, it's pretty normal that functional overlap exists :-)) A key objective of the WebCenter strategy is to combine all social and collaboration offerings under the WebCenter product family. That means that certain Beehive components will be integrated into the overall WebCenter product offering. Are there any other internal collaboration tools at Oracle? Which ones? There are two other main social tools which are widely used at Oracle. Oracle Connect was the first social tool the Oracle AppsLab team created, in 2007 (see Jake's blog post for details). It is still extensively used. And as a former Sun guy I like this quote from the blog post: "Traffic to Connect peaked right after the Sun merger in 2010, when it served several hundred thousand pageviews each month; since then, traffic has subsided, but still averages tens of thousands of pageviews to several thousand users each month." Oratweet, Oracle's internal microblogging platform, has been in use since June 2008 and is still growing. It is entirely written in Oracle Application Express (APEX), a rapid web application development tool for the Oracle database. Wanna try it out? Here you can download the code. What is Oracle's strategy regarding (all these) collaboration tools? Pretty straightforward: the strategy is to seamlessly integrate the WebCenter social and collaboration services into all business applications to help customers socialize their enterprise. 

    Read the article

  • Is a Hyperthreaded CPU more powerful and more efficient than a Dual-core CPU? [closed]

    - by user1811864
Which computer should I choose? Hello - they are getting rid of the old computer equipment in the office and I have to choose one computer to take home; I get first pick. The options:
- 15-inch LCD screen, 4 GB of RAM, Core 2 Duo E8400 dual core at 3.00 GHz, DVD writer, Windows Vista / Linux
- 15-inch CRT monitor, 2 GB of RAM, Pentium 4 at 2 GHz, single core with HT technology, Windows XP
Both have 250 GB hard disks. My friend is telling me to choose the second one, the Pentium single core with HT, because he says it runs faster thanks to the HT technology, runs cooler, and consumes less electricity so it won't overheat; he claims HT means high definition, so it is good for encoding and watching HD movies with HD sound and is like a gaming PC for internet games. He also says the dual-core E8400 runs at 3 GHz compared to 2 GHz, so it heats up much more because of the two extra cores, draws more current (raising electricity bills), and is not good for gaming, HD movies, or internet flash animations and games because it overheats; he wants to take the E8400 himself because he has air conditioning at home, so it will be safe from heating. So which computer should I take? Is the second one really faster because of the HT technology, and will I be able to play internet flash card games, watch good HD movies on YouTube etc., and play all my music and songs?

    Read the article

  • User Experience Fundamentals

    - by ultan o'broin
Understanding what user experience means in the modern work environment is central to building great-looking, usable applications on the desktop or mobile devices. What better place to start a series of blog posts on Applications User Experience team enablement for customers and partners than by sharing what the term really means, writes team member Karen Scipi. The Applications UX team has gained valuable insights into developing a user experience that reflects the experience of today's worker. We have observed real workers performing real tasks in real work environments, and we have developed a set of new standards of application design that have been scientifically proven to benefit today's workers. We share such expertise to enable our customers and partners to benefit from our insights and to further their return on investment when building Oracle applications. So, What is User Experience? The user interface (UI) is about the on-screen user context provided by the layout of widgets (such as icons, fields, and buttons and more) and the visual impact of colors, typographic choices, and so on. The UI comprises the "look and feel" of the application that users interact with, and reflects, in essence, the most immediate aspects of usability we can now all relate to. User experience, on the other hand, is about understanding the whole context of the world of work: how workers go about completing tasks, crossing all sorts of boundaries along the way. It is a study of how business processes and workers' goals coincide, how users work with technology or other tools to get their jobs done, their interactions with other users, and their response to the technical, physical, and cultural environment around them. User experience is all about how users work - their work environments, office layouts, desk tools, types of devices, their working day, and more. Even their job aids, such as sticky notes, offer insight for UX innovation. User experience matters because businesses need to be efficient, work must be productive, and users now demand to be satisfied by the applications they work with. In simple terms, tasks finished quickly and accurately evoke organizational and worker satisfaction, which in turn makes workers feel good and more than willing to use the application again tomorrow. Design Principles for the Enterprise Worker The consumerization of information technology has raised the bar for enterprise applications. Applications must be consistent, simple, intuitive, but above all contextual, reflecting how and when workers work, in the office or on the go. For example, the Google search experience with its type-ahead keyword-prompting feature is how workers expect to be able to discover enterprise information, too. [Image: Type-ahead in PeopleSoft 9.1] To build software that enables workers to be productive, our design principles meet modern work requirements about consistency, with well-organized, context-driven information, geared for a working world of discovery and collaboration. Our applications must also behave in a simple, web-like way, just like the Amazon, Google, and Apple products that workers use at home or on the go. Our user experience must also reflect workers' needs for flexibility and well-loved enterprise practices such as using popular desktop tools like Microsoft Excel or Outlook as required. Building User Experience Productively The building blocks of Oracle Fusion Applications are the user experience design patterns. 
Based on the Oracle Fusion Middleware technology used to build Oracle Fusion Applications, the patterns are reusable solutions to common usability challenges that ADF developers typically face as they build applications, extensions, and integrations. Developers use the patterns as part of their Oracle toolkits to realize great usability consistently and in a productive way. Our design pattern creation process is informed by user experience research and science, an understanding of our technology's capabilities, the demands for simplification and intuitiveness from users, and the best of Oracle's acquisitions strategy (an injection of smart people and smart innovation). The patterns are supported by usage guidelines, tested in our labs, and assembled into a library of proven resources we used to build our own Oracle Fusion Applications and other Oracle applications user experiences. The design patterns library is now available to the ADF community and to our partners and customers, for free. Developers with ADF skills and other technology skills can now offer more than just coding and functionality and still use the best in enterprise methodologies to ensure that a great user experience is easily applied, scaled, and maintained, whether it be for SaaS or on-premise deployments for Oracle Fusion Applications, for applications coexistence, or for partner integration scenarios. Oracle partners and customers already using our design patterns to build solutions and win business in smart and productive ways are now sharing their experiences and insights on pattern use to benefit your entire business. Applications UX is going global with the message and the means. Our hands-on user experience enablement through ADF is expanding. So, stay tuned to Misha Vaughan's Voice of User Experience (VOX) blog and follow along on Twitter at @usableapps for news of outreach events and other learning opportunities. Interested in Learning More?
- Oracle Fusion Applications User Experience Patterns and Guidelines Library
- Shout-outs for Oracle UX Design Patterns
- Oracle Fusion Applications User Experience Design Patterns: Productivity Realized

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
Guest Posting From Rebekah Jackson This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help. It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers' CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy - and the stakes are high. Michael Horn, co-author of "Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns", set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on why successful organizations fail. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with the slide rule, then developed the computer, which centralized in the form of mainframes and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or than was on the first rocket that went to the moon. Applying this pattern to higher education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014 - and by some measures, that's already the case today. The implication is that technology moves beyond being the backbone of the campus, the IT department's domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution's strategic plan. Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution's financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed. 
Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first; technology then is the enabler. The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group and shared his thoughts candidly about his experiences with cloud deployments, as well as key issues around managing costs and security in cloud deployments. Okay, we've covered a lot of ground at this point - financial planning, business strategy, and cloud computing - with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? It seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently, 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40s or 50s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US-based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4-year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • Global Perspective: Oracle AppAdvantage Does its Stage Debut in the UK

    - by Tanu Sood
Global Perspective is a monthly series that brings experiences, business needs and real-world use cases from regions across the globe. This month's feature is a follow-up from last month's Global Perspective note from a well known ACE Director based in EMEA. My first contribution to this blog was before Oracle Open World, and I was quite excited about where this initiative would take me in my understanding of the value of Oracle Fusion Middleware. Rimi Bewtra from the Oracle AppAdvantage team came as promised to the Oracle ACE Director briefings and explained what this initiative was all about, and I then asked the directors to take part in the new survey. The story was really well received, and then at the SOA advisory board that many of these ACE Directors already take part in, there was a further discussion on how this initiative will help customers understand the benefits of adoption. A few days later Rick Beers launched the program at a lunch of invited customer executives, which included one from Pella who talked about their projects (a quick recap on that here). I wasn't able to stay for the whole event, but what really interested me was that these executives understood the technology but were looking for how they could use it to drive their businesses. Lots of ideas were bubbling up in my head about how we can use this in user groups to help our members, and the timing was fantastic, as just three weeks later we had UKOUG_Apps13, our flagship Applications conference in the UK. We had independently been working with Oracle marketing in the UK on an initiative called Apps Transformation to help our members look beyond just the application they use today. We have had a Fusion community page, but felt the options open are now much wider than Fusion Applications: there are acquired applications, social, mobility and, of course, the underlying technology, Oracle Fusion Middleware. I was really pleased to be allowed to give the Oracle AppAdvantage story as a session in our conference, and we are planning a special Apps Transformation event in March where I hope the Oracle AppAdvantage team will take part and we will have the results of the survey to discuss. But life also came full circle for me. In my first post, I talked about Andrew Sutherland and his original theory that Oracle Fusion Middleware adoption had technical drivers. Well, Andrew was a speaker at our event, and he gave a potted, tech-talk-free update on Oracle Open World. Andrew talked about the Prevailing Technology Winds and what is driving this today; he explained that in the past it was the move from simply automating processes (ERP etc.), through the altering of those processes (SOA), and on to consolidation. 
The next drivers are around the need to predict, both faster and more accurately, and to better exploit the information that we have available. He went on to talk about the Nexus of Forces: Social, Mobile, Cloud and Information - harnessing these forces of change with Oracle technology. Gartner really likes this concept, and if you want to know more you can get their paper here. All this has made me think, and I hope it will make you think too. Technology can help us drive our businesses better, and understanding your needs can be the first step on your journey, which was the theme of our event in the UK. I spoke to a number of the delegates, and I hope to share some of their stories in later posts. If you have a story to share, the survey is at: https://www.surveymonkey.com/s/P335DD3 About the Author: Debra Lilley, Fujitsu Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Debra has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Leader and then Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts. Editor's Note: Debra has kindly agreed to share her musings and experience in a monthly column on the Fusion Middleware blog, so do stay tuned…

    Read the article

  • Battery life starts at 2:30 hrs (99%), but less than 1 minute later is only 1:30 hrs (99%)

    - by zondu
After searching this and other forums, I haven't seen this same issue listed anywhere for Ubuntu 12. Prior to installing Ubuntu 12.10, my netbook (Acer AspireOne D250, SATA HDD) was consistently getting 2:30-3 hrs battery life under Windows XP Home, SP3. However, immediately after installing Ubuntu 12.10, the battery life starts out at 2:30 hrs (99%), but less than 1 minute later suddenly drops to 1:30 hrs (99%), which seems very odd. It could be a complete coincidence that the battery is suddenly flaky at the exact same moment that Ubuntu 12.10 was installed, but that doesn't seem likely. I'm a newbie to Ubuntu, so I don't have much experience tweaking/troubleshooting yet. Here's what I've tried so far:
- enabled laptop mode (sudo su, then echo 5 > /proc/sys/vm/laptop_mode) and checked that it is running when the A/C adapter is unplugged, but it doesn't seem to have made any noticeable difference in battery life,
- installed Jupiter, but it didn't work and messed up the system, so I had to uninstall it,
- disabled bluetooth (wifi is still on b/c it is necessary), set the screen to lowest brightness, etc.,
- run through at least 1 full power cycle (running until the netbook shut itself off due to critical battery) and have been using it normally (sometimes plugged in, often unplugged until the battery gets very low) for a week since installing Ubuntu 12.10,
- installed powertop, but have no idea how to interpret its results.
Here are the results of acpi -b:
- w/ A/C adapter: Battery 0: Full, 100%
- immediately after unplugging: Battery 0: Discharging, 99%, 02:30:20 remaining
- 1 minute after unplugging: Battery 0: Discharging, 99%, 01:37:49 remaining
- 2-3 minutes after unplugging: Battery 0: Discharging, 95%, 01:33:01 remaining
- 10 minutes after unplugging: Battery 0: Discharging, 85%, 01:13:38 remaining
Results of cat /sys/class/power_supply/BAT0/uevent:
w/ A/C adapter:
POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Full POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=12136000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1956000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=
immediately after unplugging:
POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11886000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=
1 minute later:
POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11728000 POWER_SUPPLY_CURRENT_NOW=1174000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=
2-3 minutes later:
POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11583000 POWER_SUPPLY_CURRENT_NOW=1209000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1878000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=
10 minutes later:
POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11230000 POWER_SUPPLY_CURRENT_NOW=1239000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1644000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=
Results of upower -i /org/freedesktop/UPower/devices/battery_BAT0:
w/ A/C adapter:
native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:24:58 2012 (823 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: fully-charged energy: 21.1248 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 12.173 V percentage: 100% capacity: 43.4667% technology: lithium-ion
immediately after unplugging:
native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:41:25 2012 (1 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 11.86 V time to empty: 2.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging
1 minute later:
native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:42:31 2012 (17 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.5432 W voltage: 11.753 V time to empty: 1.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging History (rate): 1354023751 13.543 discharging
2-3 minutes later:
native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:45:06 2012 (20 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.2824 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.7484 W voltage: 11.545 V time to empty: 1.5 hours percentage: 96.0123% capacity: 43.4667% technology: lithium-ion History (charge): 1354023906 96.012 discharging 1354023844 97.035 discharging History (rate): 1354023906 13.748 discharging 1354023875 12.992 discharging 1354023844 13.284 discharging
10 minutes later:
native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:54:24 2012 (28 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 18.1764 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.2948 W voltage: 11.268 V time to empty: 1.4 hours percentage: 86.0429% capacity: 43.4667% technology: lithium-ion History (charge): 1354024433 86.043 discharging History (rate): 1354024464 13.295 discharging 1354024433 13.662 discharging 1354024402 13.781 discharging
I noticed that between #2 and #3 (0 and 1 minutes after unplugging), while the battery still reports 99% charge and the estimate drops from 2:30 to 1:30, the energy usage goes from 8.34 W to 13.54 W and the current_now increases - but shouldn't it be using less energy in battery mode, since the screen is much dimmer and it's in power-saving mode? (Or is that normal behavior?) It also seems to drain more quickly than what it predicts, especially with the 1-1.25 hour drop in the first minute of being unplugged, which seems odd. What really concerns me is that Ubuntu 12.10 may not be properly managing the battery (with the sudden change in charge/life from 2:30 to 1:30 or 1:15 within a minute of unplugging), and that a new battery may quickly die under Ubuntu 12.10. I'd greatly appreciate any advice/suggestions on what to do, and especially whether there's a way to get back the 1-1.5 hrs of battery life that were suddenly lost when changing from WinXP to Ubuntu 12.10. Thanks :)
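One thing worth noting when reading these numbers: the time-to-empty figures above are consistent with a simple remaining-energy divided by discharge-rate estimate, so the sudden 2:30 -> 1:30 drop at a constant 99% charge tracks the jump in energy-rate from 8.35 W to 13.54 W rather than a sudden loss of capacity. A quick check in Python reproducing the estimates from the readings quoted above (the division is an assumption about how the estimate is derived, inferred from the data, not confirmed from upower's source):

    # Check "time to empty" against energy / energy-rate for the readings above.
    readings = [
        ("immediately after unplugging", 20.9196, 8.3484),   # energy (Wh), rate (W)
        ("1 minute later",               20.9196, 13.5432),
        ("10 minutes later",             18.1764, 13.2948),
    ]
    for label, energy_wh, rate_w in readings:
        hours = energy_wh / rate_w
        print(f"{label}: {hours:.2f} h")
    # Prints ~2.51, ~1.54 and ~1.37 hours, matching the reported 2.5, 1.5 and 1.4.

On that reading, the first estimate was computed while the discharge rate was still the low A/C-era figure, and the "drop" a minute later is just the estimator catching up with the real ~13.5 W draw.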

    Read the article

  • Git on DreamHost still balking on big files even after I compiled with NO_MMAP=1

    - by fuzzy lollipop
I compiled Git 1.7.0.3 on DreamHost with the NO_MMAP=1 option; I also supplied that option when I ran "make NO_MMAP=1 install". My paths are set up correctly: which git reports my ~/bin dir, which is correct, and git --version returns the correct version. But when I try to do a "git push origin master" with "big" files (~150 MB), it always fails. Does anyone have any suggestions on how to get DreamHost to accept these "big" files from a git push?
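One avenue to try (a sketch, not a confirmed fix, since the question doesn't show the actual error): on memory-constrained shared hosts, large pushes often fail because the server-side receive/repack exceeds the host's per-process memory limit, and a commonly suggested workaround is to cap pack memory in the remote repository's config. The 64m values below are illustrative, not tested on DreamHost:

    # Run inside the repository on the DreamHost side:
    git config pack.windowMemory 64m
    git config pack.packSizeLimit 64m
    git config pack.threads 1
    git config core.packedGitLimit 64m
    git config core.packedGitWindowSize 64m

If the push still dies, capturing the server-side error (e.g. via GIT_TRACE=1 on the push) would narrow down whether memory is actually the culprit.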

    Read the article

  • Mercurial set per user rights

    - by Kami
I would like to set finer-grained user access to my Mercurial repos through the CGI web interface. This is my current hgweb.config:

    [web]
    contact = first.lastname
    description = HG Repos
    allow_push=user1,user2,user3
    allow_read=user1,user2,user3

    [paths]
    repo01 = /home/mercurial/repo01
    repo02 = /home/mercurial/repo02
    repo03 = /home/mercurial/repo03
    repo04 = /home/mercurial/repo04

How do I set up the following?
- user1 has access (push/read) to repo01 and repo02 only
- user2 has access (read only) to repo01 and repo02 only
- user3 has access (read) to repo01 and repo02, and (push/read) to repo03
I've checked multiple Mercurial config tutorials, but nothing has helped me so far.
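One way to get there (a sketch; it assumes the web server's authentication is what supplies these usernames to hgweb): drop the global allow_push/allow_read lines and set them per repository in each repo's .hg/hgrc, since [web] options in a repository's own hgrc override the hgweb.config defaults for that repo. For the rules above, that would look something like:

    # /home/mercurial/repo01/.hg/hgrc  (and the same in repo02)
    [web]
    allow_read = user1, user2, user3
    allow_push = user1

    # /home/mercurial/repo03/.hg/hgrc
    [web]
    allow_read = user3
    allow_push = user3

Note that a repository with no allow_read entry is readable by everyone once the global line is removed, so repo04 would need its own [web] section too if it should stay restricted.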

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >