Search Results

Search found 23648 results on 946 pages for 'tab size'.


  • MS Office Excel Ribbon - Cannot change/hide Editing group in Home tab

    - by A9S6
    I have a .NET add-in for Excel. The add-in creates the Ribbon UI for Excel 2007 and re-purposes some existing commands such as Cut, Copy, Paste, Sort, etc. For Cut, Copy and Paste I am just overriding their OnAction value to call my own procedure when the buttons are clicked. For the Sort, Sort Asc and Sort Desc commands the case is a little different: when any of these buttons is clicked, I want to be notified and then call the default functionality. This was possible with Excel 2003 command bars by calling the Execute() method on the CommandBarControl. Excel 2007 has an ExecuteMso() method to programmatically click a ribbon element, but when OnAction is overridden, ExecuteMso() just executes my own procedure and not the default functionality of that button. So I thought I would hide the Sort buttons in the "Editing" group on the Home tab and add my own Sort, Sort Asc and Sort Desc buttons to it; the buttons would call into my procedure first, from where I would call the default behavior. Now the problem is that I am unable to change/hide the Editing group (idMso="GroupEditing"). Is this built-in group not editable? I can, however, hide the Clipboard and other groups (but can't add buttons to them).

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <customUI xmlns="http://schemas.microsoft.com/office/2006/01/customui">
          <ribbon>
            <tabs>
              <tab idMso="TabHome">
                <group idMso="GroupEditing" visible="false" />
              </tab>
            </tabs>
          </ribbon>
        </customUI>

    Read the article

  • Hibernate Exception Fixed By Alt+Tab

    - by Lee Theobald
    Hi all, I've got a very curious problem in Hibernate that I would like some opinions on. In my code, if I do the following:

    1. Go to page A
    2. Click a link on page A to be taken to page B
    3. Click on a data item on page B
    4. Exception thrown

    I get an error telling me: "failed to lazily initialize a collection of role: XYZ, no session or session was closed". Fair enough. But when I do the same thing but add an Alt+Tab in the middle, everything is fine. E.g.:

    1. Go to page A
    2. Click a link on page A to be taken to page B
    3. Hit Alt+Tab to switch to another application
    4. Hit Alt+Tab to switch back to the web browser
    5. Click on a data item on page B
    6. Everything is fine

    I'm a little confused as to how switching focus away from my application makes it act as I want it to. Does anyone have any light to shine on the subject? I don't think it's a locking issue, as even if I do the second set of steps quicker than the first, there is still no error. It's a Seam application using Hibernate 3.3.2.GA & 3.4.0.GA.

    Read the article

  • Asynchronously get user data in facebook tab?

    - by Kristoffer Nolgren
    Using the PHP SDK, I check whether a user inside a tab likes the corresponding page. If I put the following code inside index.php and use that page as my page-tab-url, it outputs '1':

        <?php
        require_once("facebook/facebook.php");
        // Create our application instance
        // (replace this with your appId and secret).
        $facebook = new Facebook(array(
          'appId'  => '1399475990283166',
          'secret' => 'mysercret',
          'cookie' => true
        ));
        $signed_request = $facebook->getSignedRequest();
        echo $signed_request['page']['liked'];
        ?>

    I would like to achieve this asynchronously instead, so I put the PHP in a separate file and try to access it using Ajax:

        $http.post('/facebook/likes.php').
          success(function(data){
            console.log(data);
          }).error(function(data){
            console.log(data);
          });

    This sample uses Angular, but which JavaScript library I'm using probably doesn't matter. When I access the info with JavaScript, Facebook doesn't seem to get the info that I liked the page. Adding a print_r($facebook); on the page, I'm retrieving the same values as if I were not in a Facebook tab:

        ( [sharedSessionID:protected] => [appId:protected] => 1399475990283166 [appSecret:protected] => 679fb0ab947c2b98e818f9240bc793da [user:protected] => [signedRequest:protected] => [state:protected] => [accessToken:protected] => [fileUploadSupport:protected] => [trustForwarded:protected] => )

    Can I access these values asynchronously somehow?

    Read the article

  • Save Cookies, Then Open Link in New Tab

    - by speedplane
    I have some JavaScript code that saves a cookie. However, if after saving the cookie the user opens a new tab, it appears that the cookie is not saved. The new tab is on the same domain. Here is my cookie setting/getting code:

        function setCookie(c_name, value, exdays) {
          var exdate = new Date();
          exdate.setDate(exdate.getDate() + exdays);
          var c_value = escape(value) + ((exdays==null) ? "" : "; expires=" + exdate.toUTCString());
          document.cookie = c_name + "=" + c_value;
        }

        function getCookie(c_name) {
          var i, x, y, ARRcookies = document.cookie.split(";");
          for (i=0; i<ARRcookies.length; i++) {
            x = ARRcookies[i].substr(0, ARRcookies[i].indexOf("="));
            y = ARRcookies[i].substr(ARRcookies[i].indexOf("=") + 1);
            x = x.replace(/^\s+|\s+$/g, "");
            if (x == c_name) {
              return unescape(y);
            }
          }
        }

    If some JavaScript calls setCookie('mycookie', 1) and the user then clicks on a link whose target is set to _blank, the cookie does not load in the new tab, so getCookie('mycookie') will not return 1. What is the problem here?

    Read the article

  • Set default tab URL in Firefox 14

    - by sebster
    In the latest Firefox update, new tabs show, instead of the previously default blank page, a window of recently viewed pages. Before this was available, I had installed an add-on to provide this (called 'fvd speed dial'). It worked fine; however, I have since deleted it as it is no longer needed, but new tabs still load the page where the add-on was housed: 'chrome://fvd.speeddial/content/fvd_about_blank.html'. I have reinstalled Firefox, yet the same problem still occurs. On the 'about:config' page I have found the setting 'browser.newtab.url' but do not know the default URL. Is there any way to remedy this? I will just add that I apologise if this is not actually how the new tab feature works; it is all I have gathered from the Firefox update page. Also, ideally I do not want to simply restore my settings, as I have changed some of them (such as the search bar) and those work fine. I am on Windows XP, Home Edition; not sure which service pack.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process accompanies the original post.

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size across 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    The first chart (full-size image in the original post) illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    The second chart (full-size image in the original post) is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again suggests that the 1MB block size is probably the best overall size, but highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the various source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading the blocks in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from the non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from the blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons

    Read the article

  • Allow opening a new tab with Ctrl+T on all websites in Firefox

    - by Martin J.H.
    In Firefox, certain websites and plugins (Adobe PDF plugin) appear to "capture" the Control key, so that when I try to open a new tab using Ctrl+T, nothing happens - or worse, something unexpected happens. Examples: On the Codecademy site, while editing code, Ctrl+T either does nothing, or (when Flash is disabled) switches the position of the two characters next to the cursor. When viewing PDFs with the Adobe PDF plugin, Ctrl+T does nothing. Is there a way to disable this "feature"? I would like Ctrl+T to always "talk" to Firefox!

    Edit: After searching Super User more deeply, this question is very similar to the questions "How to prevent keystroke grabbing/hijacking by websites in Firefox?" and "How do I prevent pages I visit from overriding selected Firefox shortcut keys?". The answers to those questions are interesting and relevant, but do not give a method for disabling combinations such as Ctrl+T. Maybe a modified Greasemonkey script is the easiest solution.

    Edit 2 - Attempt at a solution: The following UserScript (use GreaseMonkey to install it) successfully captures Ctrl+T on some sites (the Google Search site, for instance - the "Gotcha" popup appears), but not on the Codecademy site. I found another question pertaining to this subject here: "How to forbid keyboard shortcut stealing by websites in Firefox". It was raised in 2010, and the consensus was: it can't be done.

        // ==UserScript==
        // @name          Disable Ctrl T interceptions
        // @description   Stop websites from highjacking keyboard shortcuts
        //
        // @run-at        document-start
        // @include       *
        // @grant         none
        // ==/UserScript==

        // Keycode for 't'. Add more to disable other ctrl+X interceptions
        keycodes = [84];

        var lastPressedButton = [0];

        document.addEventListener('keydown', function(e) {
          // uncomment to find out the keycode for any given key
          // alert(e.keyCode);
          if (keycodes.indexOf(e.keyCode) != -1 && e.ctrlKey) {
            e.cancelBubble = true;
            e.stopImmediatePropagation();
            alert("Gotcha!");
          }
          return false;
        });

    Read the article

  • Is a bigger display (monitor) always better for development?

    - by Jitendra Vyas
    Is a bigger display (monitor) always better for development? I'm going to buy a new LCD monitor. I mostly work in Adobe Photoshop, HTML, CSS, jQuery and WordPress. Budget is not a problem, and there are many size options for LCD monitors. My questions are:

    - Is the maximum size always better, or are large monitors not always a good choice?
    - Would it be better to buy two 21.5 inch monitors rather than one 30 inch monitor?
    - Which monitor size between 21.5 inch and 30 inch would you prefer, if budget is not a problem?

    Read the article

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically we have a service that uses a local account as its logon. It has all the proper permissions, and everything is working fine; the service starts and runs and all is good. Then one day, after rebooting, the service fails to start. The logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately we have not been able to find the root cause. I suspect that the password stored for the service is somehow lost. Does anyone know where the password hash might be stored so we can check it? The only activities that seem possibly related are patching with Microsoft security patches, but we have multiple servers running the same service, we have never seen more than one affected at a time, and it is usually a different one each time this occurs. I believe this to be the same issue as: "Windows service fails to start with custom user until started once with local user", but I was unable to add comments there, and it's really old.

    Read the article

  • How to determine the size of a package in terminal prior to downloading?

    - by user14590
    When using apt-get install <package_name>, if there are dependencies that need to be downloaded, the terminal outputs the names of the additional packages and the total download size, and asks for confirmation before downloading. But when the dependencies are already satisfied and nothing but the named package needs to be downloaded, there is no size output and no confirmation. When using Synaptic, I can see the total size that new packages will use after installation, but there is no way to see the size that needs to be downloaded, except to go from package to package and use Properties to see the compressed size. I would like to know if there is a way to see the size of a package (or packages) in the terminal and in Synaptic prior to downloading and installing it.
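
    One way to check this from the terminal, as a sketch (the package name here is only an example, and apt-get may need sudo on some setups):

        # Show the compressed download size (Size:, in bytes) and the unpacked
        # size (Installed-Size:, in KiB) recorded in the package metadata.
        apt-cache show vlc | grep -E '^(Size|Installed-Size):'

        # List the .deb URIs that would be fetched, together with their sizes,
        # including any dependencies, without actually downloading anything.
        apt-get --print-uris --yes install vlc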

    Read the article

  • IIS Application Pool Memory Size Problem

    - by Roni
    I increased my application pool memory size from the default to 500 MB, and I have IIS 7.5. My server sometimes goes down (Service Unavailable) and I don't know the reason. I made a couple of changes on the same day that I changed the memory size in IIS, and since then I have been getting this problem on one of my servers. Can anybody tell me what the right way to increase the memory is, and what the problems could be? Thanks, Roni

    Read the article

  • Exchange 2010 EMS - Total size of users mailboxes within a particular OU

    - by Moif Murphy
    I'm doing some massive DB cleanups at the moment. We have two DBs, both approaching 400GB, and I want to split the DBs by department. To do that I need to know the total size of the mailboxes within an OU. I've run this: http://stackoverflow.com/questions/9796101/exchange-listing-mailboxes-in-an-ou-with-their-mailbox-size but it only gives me a list, and I need a combined TotalItemSize so I know how big the new DBs need to be. Thanks

    Read the article

  • apache/nginx html file size limit

    - by Daniel
    I want to send extremely large HTML files to users via Apache and Nginx, but the files are being truncated when they are served to the user's browser. Where can I reconfigure this size limit - which setting determines the maximum file size that gets sent?
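
    A quick way to confirm where the truncation happens, as a sketch (the path and URL are placeholders): compare the file's size on disk with the number of bytes a client actually receives.

        # Size of the file as stored on the server.
        stat -c '%s' /var/www/html/big.html

        # Bytes actually delivered over HTTP for the same file.
        curl -s -o /dev/null -w 'received: %{size_download} bytes\n' http://example.com/big.html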

    Read the article

  • Exchange 2007 | Mailbox DB Size 180GB

    - by rihatum
    Hi all, I have an Exchange 2007 SP1 server running on Windows 2008, with 6 hard drives in RAID-1 pairs: the OS, DB and logs are on separate RAID-1 disks. The size of the mailbox database is 183GB and increasing. We only have a First Storage Group and a Second Storage Group, and there is no more space on the server to install new physical disks and create another storage group.

    Q - Can I resize the RAID-1 partition where the DB is?
    Q - Any other suggestions as to how I can decrease the mailbox DB size?

    I will be grateful for your suggestions on this. Kind regards

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64-bit. I have figured out many of the quirks of APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256MB. 32MB is the default for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following:

        echo '2147483648' > /proc/sys/kernel/shmmax

    to increase my system's shared memory to 2G (half of my 4G box). Then I ran ipcs -lm, which returns:

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 2097152
        max total shared memory (kbytes) = 8388608
        min seg size (bytes) = 1

    I also made the change in /etc/sysctl.conf and then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like:

        apc.stat=0
        apc.shm_size="256M"
        apc.max_file_size="10M"
        apc.mmap_file_mask="/tmp/apc.XXXXXX"
        apc.ttl="7200"

    I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it at the default of 1. phpinfo() indicates the following about APC:

        Version                 3.1.9
        APC Debugging           Disabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apc.bPS7rB
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Oct 11 2011 22:55:02

        Directive                    Local Value
        apc.cache_by_default         On
        apc.canonicalize             O
        apc.coredump_unmap           Off
        apc.enable_cli               Off
        apc.enabled                  On On
        apc.file_md5                 Off
        apc.file_update_protection   2
        apc.filters                  no value
        apc.gc_ttl                   3600
        apc.include_once_override    Off
        apc.lazy_classes             Off
        apc.lazy_functions           Off
        apc.max_file_size            10M
        apc.mmap_file_mask           /tmp/apc.bPS7rB
        apc.num_files_hint           1000
        apc.preload_path             no value
        apc.report_autofilter        Off
        apc.rfc1867                  Off
        apc.rfc1867_freq             0
        apc.rfc1867_name             APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix           upload_
        apc.rfc1867_ttl              3600
        apc.serializer               default
        apc.shm_segments             1
        apc.shm_size                 256M
        apc.slam_defense             On
        apc.stat                     Off
        apc.stat_ctime               Off
        apc.ttl                      7200
        apc.use_request_time         On
        apc.user_entries_hint        4096
        apc.user_ttl                 0
        apc.write_lock               On

    apc.php shows the same graph no matter how long the server runs: the cache size fluctuates and hovers at just under 32MB (see image http://i.stack.imgur.com/2bwMa.png). You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed by the fact that refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32MB for its cache size? Note that identical behavior occurs with eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
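
    For reference, a minimal sketch of making the kernel shared-memory limit persistent and then verifying what the kernel is actually enforcing (it assumes the same 2G value used above and a standard CentOS layout):

        # Persist the segment-size limit across reboots.
        echo 'kernel.shmmax = 2147483648' | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p

        # Verify the currently enforced limits.
        cat /proc/sys/kernel/shmmax
        ipcs -lm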

    Read the article

  • How to log size of cookies in request header with apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header sizes are, specifically for the cookies. I've created an Apache log that captures the cookies being set on each request:

        LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have Apache log just the size (or just the length of the string) per request?
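
    As far as I know, LogFormat has no built-in way to log a header's length, so one workaround is to keep logging the raw header and post-process it - a sketch, assuming the CustomLog that uses the "cookies" format writes to /var/log/apache2/cookies.log:

        # Print the Cookie header length for each request, then an average.
        awk '{ n++; len = length($0); total += len; print len }
             END { if (n) printf "average: %.1f bytes over %d requests\n", total/n, n }' /var/log/apache2/cookies.log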

    Read the article

  • File size limit exceeded in bash

    - by yboren
    I have tried this shell script on a SUSE 10 server, kernel 2.6.16.60, ext3 filesystem. The script has a problem with this line:

        cat file | awk '{print $1" "$2" "$3}' | sort -n > result

    The file's size is about 3.2G, and I get an error message like this:

        File size limit exceeded

    In this shell, ulimit -f is unlimited. After I change the script to this:

        cat file | awk '{print $1" "$2" "$3}' > tmp
        sort -n tmp > result

    the problem is gone. I don't know why - can anyone help me with an explanation?
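
    One thing worth checking, as a sketch (it will not explain the difference by itself): the file-size limit is per process, so the value reported by the interactive shell can differ from the one the script actually inherits, e.g. when it is launched from cron, su, or another user's shell. Adding this at the top of the script shows what the script itself sees:

        # Print the file-size limit (in blocks) and the full limit set
        # as seen by the running script, not the interactive shell.
        ulimit -f
        ulimit -a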

    Read the article

  • Size of a sharepoint web application

    - by Indra
    How do you figure out the current size of a SharePoint web application? Better yet, the size of a site collection or a subsite? I am planning to move a site collection from one farm to another, and I need to plan the storage capacity first.

    Read the article

  • Automatic picture size adjustment

    - by CChriss
    Does anyone know of a free utility that lets you paste a graphics file into it (any type would work for me: jpg, bmp, png, etc.) and have it resize the image to fit within a preset size boundary? For instance, if I preset it to resize files to a maximum of 400 wide by 300 tall, and I paste in a 500x500 file, it would shrink the file to fit within the 300-tall limit. Thanks.
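
    If a command-line tool is acceptable instead of a paste-in utility, ImageMagick can do this kind of bounding-box resize; a minimal sketch with placeholder file names:

        # The ">" flag means "only shrink if larger"; aspect ratio is kept,
        # so a 500x500 input fit into 400x300 comes out as 300x300.
        convert input.png -resize '400x300>' output.png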

    Read the article
