Search Results

Search found 30046 results on 1202 pages for 'document load'.


  • Getting xsession-errors after Unity Lens install

    - by David
    I apologize in advance if I am leaving something out here. Please let me know what additional info is required, and I will be happy to post it. Can you tell me what these error messages are, and how I can go about resolving them?

    WARN 2012-02-02 14:02:56 unity.glib.dbusproxy GLibDBusProxy.cpp:255 Cannot call method InfoRequest proxy /net/launchpad/lens/utilities does not exist
    WARN 2012-02-02 14:02:56 unity.glib.dbusproxy GLibDBusProxy.cpp:255 Cannot call method SetActive proxy /net/launchpad/lens/utilities does not exist
    WARN 2012-02-02 14:02:56 unity.iconloader IconLoader.cpp:509 Unable to load contents of file:///usr/share/icons/unity-icon-theme/places/svg/category-installed.svg: Error opening file: No such file or directory
    WARN 2012-02-02 14:02:56 unity.iconloader IconLoader.cpp:509 Unable to load contents of file:///usr/share/icons/unity-icon-theme/places/svg/category-available.svg: Error opening file: No such file or directory
    WARN 2012-02-02 14:02:56 unity.glib.dbusproxy GLibDBusProxy.cpp:255 Cannot call method InfoRequest proxy /net/launchpad/lens/askubuntu does not exist
    WARN 2012-02-02 14:02:56 unity.glib.dbusproxy GLibDBusProxy.cpp:255 Cannot call method SetActive proxy /net/launchpad/lens/askubuntu does not exist

    I am also getting Nautilus errors logged here. I do not remember what lenses I installed, or from where (Software Center, manual install, etc.).

    Read the article

  • Scalable Architecture for modern Web Development [on hold]

    - by Jhilke Dai
    I am researching scalable architecture for web development. The goal is to support the modern web with a flexible architecture that can scale up or down according to need without losing any core functionality. By "modern web" I mean supporting all devices used to access websites, with a different loading mechanism for each class of device. My requirements are:

    For PC: Access from a PC is faster, but it also depends on geo-location, so the application should check the capacity of the connection/browser by default and load the page accordingly.

    For Mobile: Most mobile designs these days either hide information or use a different version of the same application; e.g. Facebook uses m.facebook.com, which is completely different from the PC version. Hiding things from mobile using JavaScript or CSS is not a solution, as it still consumes bandwidth and makes the application slow.

    So my research is about serving one application that holds different presentation stacks: when the application receives a request, it sends the packaged stack appropriate to that request (see the sketch below). This way the load time for end users is faster and maintenance of the application is easier for developers. I am researching a 4-tier (layered) architecture like: Presentation Layer; Application Logic Layer -- the main logic layer, which stores the presentation stack; Business Logic Layer; Data Layer.

    Main question: Have you come across similar architectures? If so, can you list links here? I'm very interested to learn about those implementations, especially in real-world scenarios. Have you thought about similar architectures and tried your own ideas? If you have any ideas regarding this, I urge you to share. I am open to any discussion, so please feel free to comment/answer.
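    One way to picture the "packaged stack per device" idea is a front controller that inspects the request and serves a device-specific presentation bundle while the application, business, and data layers stay shared. The sketch below is not from the question; it is a minimal illustration assuming a Node/Express front end, and the bundle names and paths are made up.

    // front-controller.js -- illustrative only
    const express = require("express");
    const app = express();

    // Very rough device detection; a real system would use client hints or a device database.
    function pickStack(userAgent = "") {
      return /Mobi|Android|iPhone/i.test(userAgent) ? "mobile" : "desktop";
    }

    app.get("/", (req, res) => {
      const stack = pickStack(req.get("User-Agent"));
      // The application, business, and data layers behind this point are shared;
      // only the presentation package differs per device class.
      res.sendFile(`${__dirname}/stacks/${stack}/index.html`);
    });

    app.listen(3000);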

    Read the article

  • Mac internet problems

    - by Bradley Herman
    Our office is set up with mostly Macs (7 of them), but we do have a Windows laptop and a Windows desktop on the network as well. The network is configured with a modem going into a switch/router that runs throughout the office to the computers, along with a wireless router. Everything runs fine most of the time, but periodically while using the web, certain sites will stop loading and time out repeatedly. This usually lasts 20 minutes or so and can be incredibly annoying. Resetting the modem/router and/or rebooting the computer never helps. The weirdest part is that in almost every case, the websites are fine on our Windows machines. I frequently use GitHub, Google, Stack Overflow, and the jQuery reference, and I can count on the sites being unavailable to me at least once a day. While I can't get them to load, I can spin my chair around to the Windows server behind me and load the sites just fine. Any idea what the hell could be going on here?

    Read the article

  • SharePoint 2010 Single Page Apps without a Master Page

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2014/06/06/156827.aspxWell, maybe a stretch, but I am inclined to believe it is so.  I have been using  the JavaScript Client Object Model (JCSOM) for about 6 months now and I believe it can do about 80% of my job quickly without much fanfare.  When building sites in SharePoint no one wants the OOTB list views, etc. They want a custom look and feel!  I used to think in previous engagements that this would mean some custom server code or at least a data-form web part.   Since coming on-board in my current engagement, I have been forced because we don’t own the hosting site to come up with innovative ways to customize the UI of SharePoint.  We can push content via sandbox solutions and use JCSOM from within SharePoint Designer to do almost all customizations.  I have been using the following methodology to accomplish this: 1. Create an HTML file, which links CSS and JavaScript Files 2. Create and ASPX Web Part Page, Include a Content Editor Web Part and link to the HTML page created above.   So basically once it was done, I could copy , paste,  and rename those 4 items: CC, JS, HTML. ASPX and using MVVM just change the Model, View, and View-Model in the JavaScript file.  in about 5 minutes, I could create a completely new web part with SharePoint data.  Styling would take a little longer.  Some issues that would crop up: 1.  Multiple(s) of these web parts would not work well together on the same page (context). 2.  To separate the Web parts and context I would create a separate page for each web part and link them to a tabs layout via a Page Viewer web part or I frame.  Easy to do and not a problem but a big load problem as these web part pages even with minimal master had huge footprints.  (master page and page web part zones)   I kept thinking of my experience with SharePoint 2013 and apps!  The JavaScript was loaded from within the app, why can’t we do that in 2010 and skip the master page and web part zones. I thought at first, just link to sp.js but that didn’t work so I searched the web and found a link which did not work at all in my environment but helped me create a solution that would kudos to (Will). <!DOCTYPE html> <%@ Page %> <%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Import Namespace="Microsoft.SharePoint" %> <html> <head> <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css"> </head> <body > <form runat="server"> <!-- the following 5 js files are required to use CSOM --> <script src="/_layouts/1033/init.js" type="text/javascript" ></script> <script src="/_layouts/MicrosoftAjax.js" type="text/javascript" ></script> <script src="/_layouts/sp.core.js" type="text/javascript" ></script> <script src="/_layouts/sp.runtime.js" type="text/javascript" ></script> <script src="/_layouts/sp.js" type="text/javascript" ></script> <!-- include your app code --> <script src="../scripts/jquery-1.9.1.js" type="text/javascript" ></script> <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script> <script src="../scripts/App.js" type="text/javascript" ></script> <div ID="Wrapper"> </div> <SharePoint:FormDigest ID="FormDigest1" runat="server"></SharePoint:FormDigest> </form> </body> </html> Notice that I have the scripts loaded within the body! 
I discovered this by accident in trying to get Will’s solution to work, it made this work just like normal JCSOM from the master page.  I am sure there are other ways to do this, but I am a full time developer, so I’ll let someone else investigate the alternatives.  I have an example page showing an Announcements list as a Booklet which is a JQuery Plug-In.  Here is the page source notice the footprint is light.   <!DOCTYPE html> <html> <head> <link rel="Stylesheet" type="text/css" href="../CSS/smoothness/jquery-ui-1.10.4.custom.min.css"> <link href="../CSS/jquery.booklet.latest.css" type="text/css" rel="stylesheet" media="screen, projection, tv" /> <link href="../CSS/bookletannouncement.css" type="text/css" rel="stylesheet" media="screen, projection, tv" /> </head> <body > <form name="ctl00" method="post" action="BookletAnnouncements2.aspx" onsubmit="javascript:return WebForm_OnSubmit();" id="ctl00"> <div> <input type="hidden" name="__REQUESTDIGEST" id="__REQUESTDIGEST" value="0x3384922A8349572E3D76DC68A3F7A0984CEC14CB9669817CCA584681B54417F7FDD579F940335DCEC95CFFAC78ADDD60420F7AA82F60A8BC1BB4B9B9A57F9309,06 Jun 2014 14:13:27 -0000" /> <input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwUBMGRk20t+bh/NWY1sZwphwb24pIxjUbo=" /> </div> <script type="text/javascript"> //<![CDATA[ var g_presenceEnabled = true;var _fV4UI=true;var _spPageContextInfo = {webServerRelativeUrl: "\u002fsites\u002fDemo50\u002fTeamSite", webLanguage: 1033, currentLanguage: 1033, webUIVersion:4,pageListId:"{ee707b5f-e246-4246-9e55-8db11d09a8cb}",pageItemId:167,userId:1, alertsEnabled:false, siteServerRelativeUrl: "\u002fsites\u002fdemo50", allowSilverlightPrompt:'True'};//]]> </script> <script type="text/javascript" src="/_layouts/1033/init.js?rev=lEi61hsCxcBAfvfQNZA%2FsQ%3D%3D"></script> <script type="text/javascript"> //<![CDATA[ function WebForm_OnSubmit() { UpdateFormDigest('\u002fsites\u002fDemo50\u002fTeamSite', 1440000); return true; } //]]> </script> <!-- the following 5 js files are required to use CSOM --> <script src="/_layouts/1033/init.js"></script> <script src="/_layouts/MicrosoftAjax.js"></script> <script src="/_layouts/sp.core.js"></script> <script src="/_layouts/sp.runtime.js"></script> <script src="/_layouts/sp.js"></script> <!-- include your app code --> <script src="../scripts/jquery-1.9.1.js"></script> <script src="../Scripts/jquery-ui-1.10.3.custom.min.js" type="text/javascript"></script> <script src="../Scripts/jquery.easing.1.3.js"></script> <script src="../Scripts/jquery.booklet.latest.min.js"></script> <script src="../scripts/Announcementsbooklet.js"></script> <div ID="Accord"> </div> <script type="text/javascript"> //<![CDATA[ var _spFormDigestRefreshInterval = 1440000;//]]> </script> </form> </body> </html> Here is the source to make the booklet work: ExecuteOrDelayUntilScriptLoaded(retrieveListItems, "sp.js"); var context; var collListItem; var web; var listRootFolder; var oList; //retieve the list items from the host web function retrieveListItems() { context = SP.ClientContext.get_current(); web = context.get_web(); oList = context.get_web().get_lists().getByTitle('Announcements'); var camlQuery = new SP.CamlQuery(); camlQuery.set_viewXml('<View><RowLimit>10</RowLimit></View>'); collListItem = oList.getItems(camlQuery); listRootFolder = oList.get_rootFolder(); context.load(listRootFolder); context.load(web); context.load(collListItem); context.executeQueryAsync(onQuerySucceeded, onQueryFailed); } //Model object var Dev = function (id, title, body, expires, url) { var self 
= this; self.ID = id; self.Title = title; self.Body = body; self.Expires = expires; self.Url = url; } //View model var DevVM = new ListViewModel() function ListViewModel() { var self = this; self.items = new Array(); } function onQuerySucceeded(sender, args) { var listItemEnumerator = collListItem.getEnumerator(); while (listItemEnumerator.moveNext()) { var oListItem = listItemEnumerator.get_current(); var javaDate = oListItem.get_item('Expires'); var fmtExpires = javaDate.format('dd MMM yyyy'); var url = ""; var goodUrl = oListItem.get_item('Url'); if (goodUrl == null) { url = web.get_serverRelativeUrl() + "/Lists/Announcements/EditForm.aspx?ID=" + oListItem.get_item('ID'); } else { url = web.get_serverRelativeUrl() + oListItem.get_item('Url') } DevVM.items.push(new Dev(oListItem.get_item('ID'), oListItem.get_item('Title'), oListItem.get_item('Body'), fmtExpires, url)); } $.each(DevVM.items, function (index) { $("#Accord").append(createAccordNode(DevVM.items[index].Title, DevVM.items[index].Body, " Expires: " + DevVM.items[index].Expires, DevVM.items[index].Url)); }); $("#Accord").booklet(); } function createAccordNode(title, body, expires, url) { return ( $("<div><h3>" + title + "</h3><p><span class='titlespan'><a href='" + url + "'>" + title + "</a></span><span class='dicussionspan'>" + body + "</span><span class='expiresspan'>" + expires + "</span></p></div>") ); } function onQueryFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } The idea behind this post is that this could be used to: 1.   Create landing pages that are very un-SharePoint like! 2.   Make lightweight pages that could be used in page viewer web part or I Frame. 3.  Utilize Deep Zoom Composer and Sea-Dragon/or Silver light I will be using this for much of my development work!

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking around at different setups for a server cluster supporting SSL and I would like to benchmark my idea with you.

    Requirements:
    - All servers in the cluster should be under the same full domain name (http and https).
    - Routing to subsystems is done on URI matching in HAProxy.
    - All URIs support SSL.

    Wish: centralized routing rules.

         ---<----http-----<--
         |                  |
    Inet -->HA--+---https--->NGInx_SSL_1..N
                |
                +---http---> Apache_1..M
                |
                +---http---> NodeJS

    Idea: Configure HAProxy to route all SSL traffic (mode=tcp, algorithm=source) to an NGInx cluster that turns https traffic into http, then re-pass the http traffic from NGInx back to HAProxy for normal load balancing, which is performed based on the HAProxy config. My question is simply: is this the best way to configure it based on the requirements above?
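    For reference, a minimal HAProxy sketch of the split described above -- TCP-mode source balancing for port 443 toward the NGInx SSL tier, and HTTP-mode URI routing for plain traffic coming back. Backend names, addresses, ports, and the path rule are hypothetical; this is an untested outline rather than a recommended configuration.

    frontend fe_https
        bind *:443
        mode tcp
        default_backend be_nginx_ssl

    backend be_nginx_ssl
        mode tcp
        balance source
        server nginx1 10.0.0.11:443 check
        server nginx2 10.0.0.12:443 check

    # NGInx terminates SSL and forwards plain http back to *:80 on HAProxy
    frontend fe_http
        bind *:80
        mode http
        acl is_node path_beg /api
        use_backend be_node if is_node
        default_backend be_apache

    backend be_apache
        mode http
        balance roundrobin
        server apache1 10.0.0.21:80 check

    backend be_node
        mode http
        server node1 10.0.0.31:3000 check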

    Read the article

  • using a wiki for requirements

    - by apollodude217
    Hi, I'm looking into ways of improving requirements management. Currently, we have a Word document published on a web site. Unfortunately, we cannot (to my knowledge) look at changes from one revision to the next. I would greatly prefer to be able to do so, much like with a wiki or VCS (or both, like the wikis on Bitbucket!). Also, each document describes changes devs are expected to meet by a given deadline. There is no collection of accumulated app features documented anywhere, so it's sometimes hard to distinguish between a bug and a (poorly-designed) feature when trying to make quick fixes to legacy apps. So I had an idea I wanted to get feedback on. What about:

    1. Using a wiki so that we can track who changed what and when (mostly to see whether any edits were made since the last time one looked).
    2. Having one, say, wiki page per product rather than one per deadline, keeping up with all features of the product rather than the changes that should be implemented. This way, I can look at a particular revision of the page to see what the app should do at a given point in time, and I can look at changes to the page since the last release for the requirements to be implemented by the next deadline.

    Waddayathink?

    Read the article

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of requested resources. The SpeedTracer site says:

    NETWORK_RESOURCE_RESPONSE: "Indicates that the renderer has started receiving bits from the resource loader"
    NETWORK_RESOURCE_FINISH: "Indicates a resource load is successful and complete."

    In my mind that means we would always see a network resource response (bytes are arriving) before we see a finish (all bytes received). This doesn't seem to be the case at all. Here is a sample:

    Request Timing @33519ms for 926ms
    Response Timing @34445ms for -847ms
    Total Timing @33519ms for 78ms

    I'm guessing response time isn't supposed to be negative. Can someone explain this, or is this a bug? I'm using Chrome 10.0.612.3 dev with a SpeedTracer I downloaded today.

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study.

    At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs.

    One improvement we made over the original application was to create a PowerPoint Add-In which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the templates.

    The new library we created also enabled us to create two new Word-based reports in two weeks. The library abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup. Content Controls are a good method for identifying sections of a template to be modified or replaced. Join this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic.

    In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.
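    As a rough illustration of the "find tagged slides with LINQ" idea (this is not PSC's library -- the add-in described above presumably stores proper slide tags, whereas this sketch simply scans slide text for a made-up marker in a made-up file path):

    // Minimal sketch: locate template slides by a marker string using the Open XML SDK and LINQ.
    using System;
    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;

    class SlideFinder
    {
        static void Main()
        {
            using (var doc = PresentationDocument.Open(@"C:\templates\report.pptx", false))
            {
                // Treat any slide whose text contains the marker as a "tagged" template slide.
                var tagged = doc.PresentationPart.SlideParts
                    .Where(sp => sp.Slide.InnerText.Contains("##REVENUE_CHART##"))
                    .ToList();

                Console.WriteLine("Tagged slides found: " + tagged.Count);
            }
        }
    }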

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. Cores are supposed to be clocked at 2.2 GHz; however, the output of

    $ cat /proc/cpuinfo | grep -i "hz"

    gives me:

    model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
    cpu MHz : 800.000
    (the same two lines repeated for all eight logical cores)

    If useful, the AC adapter is plugged in (yet the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect a possible automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: OK, I checked, and yes, the output does vary with load; I reach 2.2 GHz when needed. But my following problem remains. I was checking my CPU clocking because I experienced poor performance when playing 720p video files on Ubuntu with VLC or mplayer while on battery (and I believe VLC by default only uses the CPU, not the GPU, to decode), whereas I haven't had such problems with VLC on Windows (which made me think it wasn't coming from a BIOS option; plus every option in the BIOS regarding the CPU is turned ON).
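    To confirm that this is the governor scaling the cores down rather than a hard cap, the standard Linux cpufreq sysfs files can be inspected (these commands are not from the original post, just the stock cpufreq interface):

    # current governor and frequency limits for core 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

    # watch the clocks climb while something CPU-heavy runs
    watch -n1 'grep "cpu MHz" /proc/cpuinfo'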

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The Scenario: You have a file with a string (an average sentence's worth) on each line. For argument's sake let's say this file is 1 MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data.

    The Questions: Does 'the server' (PHP, the OS, or httpd etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)? i. If it does, please explain how it works and give examples or links to relevant documentation. ii. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing?

    My Assumptions and Other Information: The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been written yet. (If this assumption is wrong please correct me.) I'm only concerned with PHP writing to and reading from a text file, and in particular the functions "fopen"/"fwrite" and mainly "file_put_contents". I have looked at the "file_put_contents" documentation but have not found the level of detail or a good explanation of what the "LOCK_EX" flag is or does. The scenario is an EXAMPLE of a worst-case scenario where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation, and yes, I understand PHP is not the perfect language for working with files in this way...
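    For what it's worth, a minimal sketch of how advisory locking is usually paired on both sides -- an exclusive lock while rewriting and a shared lock while reading. The file name is made up, and the locks are advisory: they only help if every reader and writer participates.

    <?php
    $path = '/tmp/sentences.txt';

    // Writer: LOCK_EX makes file_put_contents hold an exclusive lock for the duration of the write.
    function rewrite_file($path, array $lines)
    {
        return file_put_contents($path, implode("\n", $lines), LOCK_EX);
    }

    // Reader: take a shared lock so we never see a half-written file.
    function read_file($path)
    {
        $fp = fopen($path, 'r');
        if ($fp === false) {
            return false;
        }
        flock($fp, LOCK_SH);            // blocks while a writer holds LOCK_EX
        $data = stream_get_contents($fp);
        flock($fp, LOCK_UN);
        fclose($fp);
        return $data;
    }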

    Read the article

  • Last click counts link cookies

    - by user3636031
    I want to fix it so that only the last click gets the cookie. Here is my script:

    <script type="text/javascript">
    document.write('<scr' + 'ipt type="text/javascript" src="' + document.location.protocol + '//sc.tradetracker.net/public/tradetracker/tracking/?e=dedupe&amp;t=js"></scr' + 'ipt>');
    </script>
    <script type="text/javascript">
    // The pixels.
    var _oPixels = {
        tradetracker: '<img id="tt" />',
        tradedoubler: '<img id="td" />',
        zanox: '<img id="zx" />',
        awin: '<img id="aw" />'
    };
    // Run the dedupe.
    _ttDedupe( 'conversion', 'network' );
    </script>
    <noscript>
    <img id="tt" />
    <img id="td" />
    <img id="zx" />
    <img id="aw" />
    </noscript>

    How can I get this right? Thanks!

    Read the article

  • can I force server to always use turboboost?

    - by javapowered
    I'm using an HP DL360p Gen8 with 2 x Xeon E5-2640. I do not load the CPU 100%; I load it only ~10%, so I guess Turbo Boost is not activated. However, I'm using my server for trading, so I absolutely don't care about CPU load, but I always want to process my data ASAP. So I want the server to operate at the maximum 3 GHz. I.e., 90% of the time the CPU doesn't have anything to process; 10% of the time I have data to process, but I need to process it ASAP. I need every single microsecond. So I want the server to always operate in maximum "turboboosted" mode. Is it possible?
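    Turbo residency itself is decided by the CPU, but the usual software-side lever on Linux is to pin the frequency governor to "performance" so the cores sit at their highest P-state instead of ramping up on demand. A hedged sketch using the standard cpufreq interface (run as root; cpupower typically comes from the linux-tools package):

    # either via cpupower...
    cpupower frequency-set -g performance

    # ...or directly through sysfs for every core
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
    done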

    Read the article

  • Updating modules on VPS [closed]

    - by tertle
    I've been trying to install OpenVPN on a VPS but have run into a few problems when trying to start the OpenVPN server:

    Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
    ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
    'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
    'iptables-restore: line 46 failed']:
    internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
    service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
    service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
    service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    Anyway, after a bit of playing around and some advice, I found that the Linux kernel and modules don't match on my server. uname -r returns: 2.6.18-028stab070.14 and ls /lib/modules returns: 2.6.18-028stab070.7. The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is, is it possible for me to update my modules on a VPS, and if so, how would I do this, or is this something I'll need to get my host to do? Thanks in advance.

    Read the article

  • Amazon EC2, fastest way to get a node into an existing cluster

    - by imaginative
    I'm new to Amazon AWS. A lot of the time I hear about folks spawning instances and almost instantly putting them behind a load balancer and into an existing cluster. In the traditional world of managed machines, this would include provisioning hardware, installing an OS, configuring the network on the machine and, once the network is available, using a tool of your choice such as CFEngine, Puppet or Chef to bootstrap the machine based on its class. It seems like there are "shortcuts" that are able to get a server of a particular class up and running in Amazon EC2. If I have a particular stack running on my server, such as Erlang, Tomcat 6 etc., what's the fastest way to get these up and running and hooked into Amazon's load balancer? From network, to software stack, to kernel tuning? Is it a combination of creating an AMI and then running a tool like Puppet against the new instance? Any ideas?
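    One common pattern (not stated in the question, just the usual approach) is to bake a base AMI with the heavy stack pre-installed and let a small user-data script finish configuration on first boot -- for example by pointing a Puppet agent at your master -- so the instance can join the load balancer as soon as the run completes. A rough sketch with a hypothetical puppet master hostname:

    #!/bin/bash
    # cloud-init user-data: runs once on first boot of the new instance
    apt-get update -y
    apt-get install -y puppet
    # pull this node's class-specific configuration from the (hypothetical) master
    puppet agent --server puppet.example.internal --waitforcert 60 --test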

    Read the article

  • Setting jQuery after ASP.net AJAX partial post back

    - by Steve Clements
    OK, so for some reason you have a mega mashup solution with ASP.NET AJAX, jQuery and Web Forms. Perhaps you are just on the migration from AjaxControlToolkit to the jQuery UI framework – who knows!! Anyway, the problem is that when you post back with something like an UpdatePanel, you will find that your nicely set up jQuery stuff, like the datepicker for example, will no longer work. You may have something like this…

    $(document).ready(function () {
        $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
    });

    When your ASP.NET UpdatePanel posts back, you will find that your datepicker has gone. Bugger! Well, you need to add this little gem to set it back up again once the UpdatePanel comes back to the page.

    var prm = Sys.WebForms.PageRequestManager.getInstance();
    prm.add_endRequest(function () {
        $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
    });

    Or, like me, you would have a JavaScript function, something like InitPage(); do all your work in there and call that on document.ready and endRequest. Your choice… you have the power.
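    For completeness, a small sketch of the InitPage() pattern mentioned above (the function name is the post's suggestion; the body is illustrative):

    function InitPage() {
        // all jQuery wiring lives in one place
        $(".date-edit").datepicker({ dateFormat: "dd/mm/yy", firstDay: 1, showOtherMonths: true, selectOtherMonths: true });
    }

    // run on initial load...
    $(document).ready(InitPage);

    // ...and again after every ASP.NET AJAX partial postback
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(InitPage);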

    Read the article

  • Temp profile used when User logs in

    - by TJ
    One of my users logged into his computer (Windows XP) last night, only to be met with an error message that it could not load his profile and that he would be logged in using a temporary profile. Typically when this happens I shut the machine down and restart, and the correct profile will load when they log in again. Not this time. In the user profile options under Computer Properties > Advanced > User Profiles, it shows that there are three profiles with his name. Two are the exact same size with the same modified date (5/5/10) and the other is what I would expect size-wise for a new profile, with a modified date of today. What are my options to restore his profile?

    Read the article

  • tcpsndbuf high fail count

    - by Matthew Crenshaw
    I've got a small setup: one machine that acts as a load balancer and two machines that do all the work. The load balancer runs nginx (static content + php proxying to workers) and mysql; the two workers run php5-fpm and memcached (pooled between workers). Here are the beancounters (held / maxheld / barrier / limit / failcnt) for the balancer:

    tcpsndbuf  2171848  2386280  10000000  20000000  3947733
    tcprcvbuf  1248288  1669504  10000000  20000000  0

    Here's worker 1:

    tcpsndbuf  951976   1262672  20000000  40000000  0
    tcprcvbuf  278528   393496   20000000  40000000  0

    Here's worker 2:

    tcpsndbuf  989888   527472   20000000  40000000  0
    tcprcvbuf  212992   452520   20000000  40000000  0

    The balancer has 1 GB RAM, the two workers have 2 GB RAM each. What is eating my send buffer?
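    A couple of standard commands for narrowing this down (not from the original question; ss is part of iproute2 and /proc/user_beancounters is the stock OpenVZ accounting file):

    # per-socket memory, including send-queue bytes, on the balancer
    ss -tmn

    # watch whether the failcnt column keeps climbing as the barrier is hit
    grep -E 'tcpsndbuf|tcprcvbuf' /proc/user_beancounters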

    Read the article

  • GF 3.0.1 + Virtual Server: www.myhost.com:8080/projectname-war => www.myhost.com. How?

    - by Harry Pham
    I need to change www.myhost.com:8080/myproject-war to www.myhost.com. This is what I tried. In the admin console in GlassFish 3.0.1, I created an http-listener-3 with port 9090 and address 0.0.0.0 (I wanted port 80, but got access-denied or some such). I then created a virtual server called scholar, with Id=scholar, Hosts=${com.sun.aas.hostName}, Network Listeners: http-listener-3, Default Web Module=project#project-war.war (it is the only option in the drop-down list since I only deploy one app). Then under Applications, I set the virtual server of the application to scholar. Save and restart. I tried localhost:9090, expecting that it would load my project just as if I had typed localhost:9090/project-war. But it does not. I already set the Default Web Module to be the project; why doesn't it load the project by default?

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in it’s entirety before moving on because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines. The Role of SinceID Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries.  The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries: // last tweet processed on previous query set ulong sinceID = 210024053698867204; ulong maxID; const int Count = 10; var statusList = new List<status>(); Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, to initialize SinceID at too low of a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next. Initializing MaxID On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.  
Here’s the code for the initial query: var userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.SinceID == sinceID && tweet.Count == Count select tweet) .ToList(); statusList.AddRange(userStatusResponse); // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple reasons why the number of tweets that are returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be because there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified Tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets receive during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If  the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what’s important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set. Retrieving Remaining Tweets Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set: do { // now add sinceID and maxID userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == "JoeMayo" && tweet.Count == Count && tweet.SinceID == sinceID && tweet.MaxID == maxID select tweet) .ToList(); if (userStatusResponse.Count > 0) { // first tweet processed on current query maxID = userStatusResponse.Min( status => ulong.Parse(status.StatusID)) - 1; statusList.AddRange(userStatusResponse); } } while (userStatusResponse.Count != 0 && statusList.Count < 30); Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. 
Reasons why there wouldn’t be results include: if the first query got all the new tweets there wouldn’t be more to get and there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID are the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read or a max number of tweets have been read.  Logically, you want to stop reading when you’ve read all the tweets and that’s indicated by the fact that the most recent query did not return results. I put the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past available buffer and I wanted you to be able to see the complete output. Yet, there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits and making too many queries per minute will result in an error from twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at the statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again. Summary Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.   @JoeMayo

    Read the article

  • RDA and Fusion Middleware Diagnostic Framework Integration

    - by Daniel Mortimer
    Further to my last blog entry "FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!" I have spent some time exploring how Remote Diagnostic Agent (RDA) integrates with Diagnostic Framework. Remote Diagnostic Agent, by default, collects the information about the last 10 incidents which have been captured in the Automatic Diagnostic Repository (ADR). This information can be located via RDA's Start Page menu system. See screenshot below.

    Screenshot - Viewing Diagnostic Framework Incident Information in a RDA Package

    Note: In the next release of RDA - version 4.30 - the Diagnostic Repository menu label will have its own position in the WebLogic managed server sub-menu hierarchy rather than be a child menu item of the logs menu.

    Diagnostic Framework is also capable of launching the RDA engine and including the output in an incident package. This is achieved via the command "IPS GENERATE PACKAGE". The RDA output is written to:

    DOMAIN_HOME/servers/<server name>/adr/diag/ofm/<domain name>/<server name>/incpkg/pkg_[number]/seq_[number]/rda

    The RDA collected files are best viewed (as shown in the screenshot above) by opening the RDA Start Page - "DFW__start.htm" - from this directory in a browser. If you do not have a browser available on the host machine, zip the contents of the rda directory and transfer and extract it to a machine which has a browser. The explanation of the integration goes a little deeper. If you want to know more, read My Oracle Support document: Understanding RDA and FMW 11g Diagnostic Framework Integration [Document 1503644.1]

    Read the article

  • Generating documents with templating from a form

    - by Anna
    Hello, I would like to create a document generator with templating. The workflow should be as follows:

    1. The user inputs data into a static form (simple text inputs).
    2. The user chooses a graphically designed template.
    3. A document in the chosen template containing the user data is generated.

    The initial template repository is prepared in advance, but it should be easy to add new templates to the process. I have the full MS Office suite and the preferred file format is MS .doc. I can do a little VB scripting if needed, but I prefer not to. Any advice would be greatly appreciated. Thank you, Anna

    Read the article

  • Hot Off the Press - Oracle Exadata: A Data Management Tipping Point

    - by kimberly.billings
    Advances in data-management architecture - including CPU, memory, storage, I/O, and the database - have been steady but piecemeal. In this report, Merv Adrian describes how Oracle Exadata not only provides the latest technology in each part of the data-management architecture, but also integrates them under the full control of one vendor with a unified approach to leveraging the full stack. He writes, "the real 'secret sauce' of Oracle Exadata V2 is the way in which these technologies complement each other to deliver additional performance and scalability." Merv interviews two Exadata customers, Banco Transylvania and TUI Netherlands, and concludes that early indications are that Oracle Exadata is delivering on its promise of extreme performance and scalability. His recommendation to IT is to target corporate applications with the biggest potential for speed-based enhancement, and consider whether Oracle Exadata V2 can cost-effectively enable new ways to use these for competitive advantage. Read the full report.

    Read the article

  • problems with installation of ubuntu 12.04 (64 and 32 bits)

    - by user76104
    I am Latin American, so sorry for my English. I am trying to install Ubuntu on my new PC but I am having serious problems. When I put in the installation CD and it runs, everything goes fine until the "window" where the user has to decide whether to try Ubuntu or install it on the machine; that "window" appears blank, my mouse and keyboard respond really slowly, and I can't do anything, so I have to shut the machine down by pressing the power button. The specifications of my PC are: Gigabyte EX58-UD7, i7 950, Western Digital Caviar Black, 6 GB Corsair memory, EVGA GTX 580. Please, I really need to install Ubuntu or another Linux distribution; I am using the Seismic Unix program on my laptop and, well, it burns when I run a script! Help please :) What can I do? What do I have to download? I am not a pro in Linux, so please be patient with me. When the installer starts to load the Ubuntu environment, I managed to reach a menu with a graphical environment and chose to try Ubuntu; it started to load but it freezes on the Ubuntu logo with red spots and only shows me the pointer, nothing more... any other option? PS: I had previously tried to install the alternate version of Ubuntu without success.

    Read the article

  • juju bootstrap key error?

    - by mxdog
    I followed the guides as best I could. There is some information missing, like whether to run as root or as a regular user, formatting, quotes or no quotes, what the password or passphrase for ssh should be, and a couple of other trivial things, which I'm sure add up to where I'm at (stuck!). So I have a working MAAS server on mx-nas-01 and 3 nodes, and I have been trying to get juju to start. I have tried this as both root and my regular account, sudo and no-sudo, and here is the output I get from juju bootstrap (however I try it). I don't know if this could be a host, domain, or account issue. I will mention I did have to shut off my dhcp server and install dnsmasq (all defaults) to get the MAAS PXE boot to work for the nodes.

    mxdog@mx-nas-01:/$ juju bootstrap
    Traceback (most recent call last):
      File "/usr/bin/juju", line 8, in main(sys.argv[1:])
      File "/usr/lib/python2.7/dist-packages/juju/control/init.py", line 183, in main env_config.load_or_write_sample()
      File "/usr/lib/python2.7/dist-packages/juju/environment/config.py", line 229, in load_or_write_sample self.load()
      File "/usr/lib/python2.7/dist-packages/juju/environment/config.py", line 115, in load self.parse(file.read(), path)
      File "/usr/lib/python2.7/dist-packages/juju/environment/config.py", line 138, in parse config = SCHEMA.coerce(config, [])
      File "/usr/lib/python2.7/dist-packages/juju/lib/schema.py", line 266, in coerce new_dict[k] = self.schema[k].coerce(v, path)
      File "/usr/lib/python2.7/dist-packages/juju/lib/schema.py", line 233, in coerce new_subvalue = self.value_schema.coerce(subvalue, value_path)
      File "/usr/lib/python2.7/dist-packages/juju/lib/schema.py", line 301, in coerce return self.schemas[selected].coerce(value, path)
    KeyError: 'mass'

    environments:
      maas:
        type: mass
        mass-server: http://192.168.0.30:80/MAAS
        mass-oauth: tDRdtJeEKVARBh93eT:N5dK5HSZBsA45cBdx9:S8wMNrfkT9PeYvQN9YrnbHGxmKARv8vb
        admin-secret: ##########
        default-series: precise
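    Judging from the traceback, the KeyError: 'mass' is raised when juju looks up the provider type, which suggests the config spells the MAAS provider (and its key prefixes) as "mass" instead of "maas". A corrected environments.yaml sketch, inferred from the error rather than from anything quoted in the post, with the credential replaced by a placeholder:

    environments:
      maas:
        type: maas
        maas-server: http://192.168.0.30:80/MAAS
        maas-oauth: <your MAAS API key>
        admin-secret: <some secret>
        default-series: precise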

    Read the article

  • Ad-hoc String Manipulation With Visual Studio

    - by Liam McLennan
    Visual Studio supports relatively advanced string manipulation via the ‘Quick Replace’ dialog. Today I had a requirement to modify some html, replacing line breaks with unordered list items. For example, I need to convert:

    Infrastructure<br/>
    Energy<br/>
    Industrial development<br/>
    Urban growth<br/>
    Water<br/>
    Food security<br/>

    to:

    <li>Infrastructure</li>
    <li>Energy</li>
    <li>Industrial development</li>
    <li>Urban growth</li>
    <li>Water</li>
    <li>Food security</li>

    This cannot be done with a simple search-and-replace but it can be done using the Quick Replace regular expression support. To use regular expressions expand ‘Find Options’, check ‘Use:’ and select ‘Regular Expressions’. Typically, Visual Studio regular expressions use a different syntax to every other regular expression engine. We need to use a capturing group to grab the text of each line so that it can be included in the replacement. The syntax for a capturing group is to replace the part of the expression to be captured with { and }. So my regular expression:

    {.*}\<br/\>

    means capture all the characters before <br/>. Note that < and > have to be escaped with \. In the replacement expression we can use \1 to insert the previously captured text. If the search expression had a second capturing group then its text would be available in \2 and so on. Visual Studio’s quick replace feature can be scoped to a selection, the current document, all open documents or every document in the current solution.

    Read the article
