Search Results

Search found 30742 results on 1230 pages for 'folder size'.


  • Text Box size is different in IE 6 and Firefox 3.6

    - by user299873
    I am facing issues with text box size when viewing in Firefox 3.6. The input is:
      <input class="dat" type="text" name="rejection_reason" size="51" maxlength="70" onchange="on_change();">
    and the style is:
      .dat { font-family: verdana, arial, helvetica; font-size: 8pt; font-weight: bold; text-align: left; vertical-align: middle; background-color: White; }
    The text box in Firefox is a bit smaller than in IE6. I'm not sure why IE6 and Firefox display text boxes of different sizes.
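
    The size="51" attribute is measured in average character widths, which IE6 and Firefox compute differently for the same font, so the rendered boxes differ. A minimal sketch of the usual fix, assuming an explicit pixel width is acceptable (the 340px value is illustrative): give the box its width in CSS so both browsers render it identically.

      .dat {
        font-family: verdana, arial, helvetica;
        font-size: 8pt;
        font-weight: bold;
        text-align: left;
        vertical-align: middle;
        background-color: White;
        width: 340px; /* explicit CSS width overrides the browser-specific rendering of size="51" */
      }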

    Read the article

  • How to create a folder in Eclipse?

    - by Polaris878
    Hi, I'd like to create a folder under a package in Eclipse. The purpose of this folder is merely organizational; I don't want it to be another package. Every time I try to add a folder under a package, Eclipse just creates a package instead. I'd like to have the following structure: project/src/package1/someClass.java, project/src/package1/someFOLDER/anotherClass.java, project/src/package1/package2/anotherFOLDER/oneOtherClass.java. Is it possible to do this without adding a package? I come from a .NET/C# and C++ background; there I'd just add a folder, and the reference to that class would be updated in the project. How can I add a purely organizational folder in Eclipse? Thanks

    Read the article

  • Changing text size on a ggplot bump plot

    - by Tom Liptrot
    Hi, I'm fairly new to ggplot. I have made a bump plot using the code posted below, which I got from someone's blog (I've lost the link). I want to be able to increase the size of the labels (here letters, which are very small on the left and right of the plot) without affecting the width of the lines (this will only really make sense after you have run the code). I have tried changing the size parameter, but that always alters the line width as well. Any suggestions appreciated. Tom
      require(ggplot2)
      df <- matrix(rnorm(50), 5, 10) # makes a random example
      colnames(df) <- letters[1:10]
      Price.Rank <- t(apply(df, 1, rank))
      dfm <- melt(Price.Rank)
      names(dfm) <- c("Date", "Brand", "value")
      p <- ggplot(dfm, aes(factor(Date), value, group = Brand, colour = Brand, label = Brand))
      p1 <- p + geom_line(aes(size = 2.2, alpha = 0.7)) +
        geom_text(data = subset(dfm, Date == 1), aes(x = Date, size = 2.2, hjust = 1, vjust = 0)) +
        geom_text(data = subset(dfm, Date == 5), aes(x = Date, size = 2.2, hjust = 0, vjust = 0)) +
        theme_bw() + opts(legend.position = "none", panel.border = theme_blank())
      p1 + theme_bw() + opts(legend.position = "none", panel.border = theme_blank())
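
    A sketch of the usual fix for this: aesthetics set inside aes() are mapped and rescaled by ggplot, so aes(size = 2.2) does not mean "2.2 points". Constant sizes belong outside aes(), where each geom takes its own value, letting the labels grow while the lines stay thin (this reuses dfm from the code above; size = 8 for the labels is an illustrative value, and opts()/theme_blank() match the ggplot2 version used in the question):

      p <- ggplot(dfm, aes(factor(Date), value, group = Brand, colour = Brand, label = Brand))
      p + geom_line(size = 2.2, alpha = 0.7) +  # constant line width, set outside aes()
        geom_text(data = subset(dfm, Date == 1), aes(x = Date), size = 8, hjust = 1, vjust = 0) +
        geom_text(data = subset(dfm, Date == 5), aes(x = Date), size = 8, hjust = 0, vjust = 0) +
        theme_bw() +
        opts(legend.position = "none", panel.border = theme_blank())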

    Read the article

  • Exchange 2003 Public Folder Replica list

    - by Niall
    Hi, I am trying to update a replica list on an Exchange 2003 public folder. I am using the WMI class Exchange_PublicFolder to try to add an Exchange server (using the server's DN) via the AddReplica method. Every time I run this I get an "invalid parameter" exception. Below is the code that I am using:
      WMI.Connect(Server, credentials)
      Using WMISearcher As New ManagementObjectSearcher(WMI.Scope, _
          New ObjectQuery(String.Format("SELECT * FROM Exchange_PublicFolder WHERE path='{0}'", Name)))
          Using PublicFolder As ManagementObjectCollection = WMISearcher.Get
              For Each Folder As ManagementObject In PublicFolder
                  Dim BaseFolder As ManagementBaseObject = Folder.GetMethodParameters("AddReplica")
                  BaseFolder("path") = ServerDN
                  Folder.InvokeMethod("AddReplica", BaseFolder, Nothing)
              Next
          End Using
      End Using
    I have used WMI before, and I can see that the call is connecting to the correct public folder because I can iterate through the properties once the query has executed. I am not sure what I am doing wrong here. If anyone has any ideas or comments, please let me know. Thanks, Niall

    Read the article

  • J2ME private folder (only accessible to my MIDlet)

    - by Shankar
    I have two MIDlets: one downloads some files from the server every day, and the other uses these files. If I download the files to a normal folder, the mobile user may delete the folder or files manually. So I need a private folder which is hidden and only accessible to my MIDlets. I have heard about the private folders the Symbian platform provides for each application, which are not accessible to users. I need such a folder for my J2ME app. How do I create such a folder? Shankar
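
    A sketch of the usual J2ME answer, assuming the Record Management System (RMS) is acceptable in place of a real folder: record stores live in storage that is private to the MIDlet suite, which the user cannot browse or delete file-by-file. The store name "downloads" is an illustrative assumption, and if the two MIDlets ship in separate suites, the reader must open the store with the three-argument openRecordStore(name, vendorName, suiteName) and the writer must create it with AUTHMODE_ANY.

      import javax.microedition.rms.RecordStore;
      import javax.microedition.rms.RecordStoreException;

      public class PrivateStore {
          // Append one downloaded file's bytes to the suite-private record store.
          public static void save(byte[] fileBytes) throws RecordStoreException {
              RecordStore rs = RecordStore.openRecordStore("downloads", true);
              try {
                  rs.addRecord(fileBytes, 0, fileBytes.length);
              } finally {
                  rs.closeRecordStore();
              }
          }
      }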

    Read the article

  • GIT: Checkout to a "Really" Specific Folder

    - by Rafid K. Abdullah
    I want to export, checkout, or whatever you call it, from the index, HEAD, or any other commit, to a specific folder. How is that possible? Similar questions have already been asked: "GIT: Checkout to a specific folder" and "How to do a 'git export' (like 'svn export')". But the problem with the proposed solutions is that they preserve the relative path. So, for example, if I use the mentioned method to check out the file nbapp/editblog.php to the folder temp, the file is checked out to temp/nbapp/editblog.php! Is there any way to check out to temp directly? Another important thing is to be able to check out from HEAD or any other commit. checkout-index (which, unlike a normal checkout, allows a --prefix option for checking out to a specific folder) checks out only the index. What if I want to check out a file from a certain commit to a certain folder?
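
    A sketch of two common ways to do this (the paths and commits are the question's own; temp must already exist):

      # write one file from any commit straight into temp, flattening the path
      git show HEAD:nbapp/editblog.php > temp/editblog.php

      # or export a whole subtree from any commit, stripping the leading directory
      git archive HEAD nbapp | tar -x --strip-components=1 -C temp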

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, then I get 32 bytes of data out. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of padding. If I use Zeros padding for 16 bytes of data, I get out 16 bytes of data. So for Zeros padding, if the data matches the block size, it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
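
    This is PKCS7 behaving as PKCS #7 defines it: the last byte of the padded plaintext must always encode the pad length, so when the data already fills a block, a whole extra block of 0x10 bytes is appended; otherwise a trailing 0x10 in real data would be indistinguishable from padding. Zeros padding carries no such marker, which is why it adds nothing here (and also why it cannot be stripped unambiguously). A minimal sketch demonstrating the behavior:

      using System;
      using System.Security.Cryptography;

      class PaddingDemo
      {
          static void Main()
          {
              byte[] data = new byte[16]; // exactly one AES block

              using (var aes = new RijndaelManaged { Padding = PaddingMode.PKCS7 })
              using (ICryptoTransform enc = aes.CreateEncryptor())
              {
                  // PKCS7 appends a full block of 0x10 bytes, so the
                  // ciphertext is two blocks long.
                  byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                  Console.WriteLine(cipher.Length); // prints 32
              }
          }
      }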

    Read the article

  • TFS: branch a moved folder based on label or date

    - by Andy
    I've moved a folder in TFS using the "move" command, but now I cannot create branches off the moved folder based on a date or label (the label was created when the source was in the old folder). I can, however, create a branch based on "latest version". I get the error message "No items match" if I try to branch off a label. I'm guessing the label references the files using the old folder path from before I moved it. I also get no files if I try to "get specific version" by either date or label. I've tried to roll back the folder move, but this gives me errors such as "An unexpected error occurred".

    Read the article

  • Extract files from zip folder and store these files in blobstore

    - by Eng_Engineer
    I want to upload a zip file from a file input in a form, then extract the contents of the uploaded zip and store those files in the blobstore, in order to download them later as one folder. The problem is that I can't deal with the zip directly (to read it). I tried this:
      form = cgi.FieldStorage()
      file_upload = form['file']
      zip1 = file_upload.filename
      zipstream = StringIO.StringIO(zip1.read())
    But the problem is still that I can't read the zip. I also tried to read the zip directly, like this:
      z1 = zipfile.ZipFile(zip1, "r")
    But there was an error this way too. Please can anyone help me? Thanks in advance.
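
    A sketch of the likely fix: form['file'].filename is only the client-side name string, so calling .read() on it (or handing it to ZipFile) fails. The uploaded bytes live in the field's .value (or .file) attribute, and those are what should be wrapped:

      import cgi
      import StringIO
      import zipfile

      form = cgi.FieldStorage()
      file_upload = form['file']

      # .value holds the uploaded bytes; .filename is just a name string
      zipstream = StringIO.StringIO(file_upload.value)
      archive = zipfile.ZipFile(zipstream, "r")

      for name in archive.namelist():
          data = archive.read(name)  # each member's bytes, ready to store in the blobstore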

    Read the article

  • Problem deleting a folder

    - by rajivpradeep
    Hi, I have two applications: one runs a program, and the other deletes the folder containing the first application. I call the deleting application from within the application used to run the programs. But when I run the first app, the program runs and the folder doesn't get deleted. When I run the deleting app separately, the folder gets deleted. What might be the solution? The deleting app is in a separate folder.

    Read the article

  • GWT - Retrieve size of a widget that is not displayed

    - by Garagos
    I need to set the size of an AbsolutePanel based on its child's size, but the getOffset* methods return 0 because (I think) the child has not been displayed yet. A quick example:
      AbsolutePanel aPanel = new AbsolutePanel();
      HTML text = new HTML(/* variable length text */);
      int xPosition = 20; // actually variable
      aPanel.add(text, xPosition, 0);
      aPanel.setSize(xPosition + text.getOffsetWidth() + "px", "50px"); // 20px 50px
    I could also solve my problem by using the AbsolutePanel's size to set the child's position and size:
      AbsolutePanel aPanel = new AbsolutePanel();
      aPanel.setSize("100%", "50px");
      HTML text = new HTML(/* variable length text */);
      int xPosition = aPanel.getOffsetWidth() / 3; // once again, getOffsetWidth() returns 0
      aPanel.add(text, xPosition, 0);
    In both cases, I have to find a way to either retrieve the size of a widget that has not been displayed, or be notified when a widget is displayed.
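
    A sketch of the usual workaround for the GWT 2.x-era API: a widget only gets an offset size after it is attached and the browser has run a layout pass, so attach it with visibility:hidden (display:none would still measure 0) and read the size in a deferred command once the current event returns:

      import com.google.gwt.core.client.EntryPoint;
      import com.google.gwt.user.client.Command;
      import com.google.gwt.user.client.DeferredCommand;
      import com.google.gwt.user.client.ui.AbsolutePanel;
      import com.google.gwt.user.client.ui.HTML;
      import com.google.gwt.user.client.ui.RootPanel;

      public class SizeProbe implements EntryPoint {
          public void onModuleLoad() {
              final AbsolutePanel aPanel = new AbsolutePanel();
              RootPanel.get().add(aPanel);

              final HTML text = new HTML("variable length text");
              final int xPosition = 20;

              // visibility:hidden keeps the element in the layout, so it is
              // measurable once attached (unlike display:none)
              text.getElement().getStyle().setProperty("visibility", "hidden");
              RootPanel.get().add(text);

              DeferredCommand.addCommand(new Command() {
                  public void execute() {
                      int width = text.getOffsetWidth(); // non-zero now
                      RootPanel.get().remove(text);
                      text.getElement().getStyle().setProperty("visibility", "visible");
                      aPanel.add(text, xPosition, 0);
                      aPanel.setSize(xPosition + width + "px", "50px");
                  }
              });
          }
      }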

    Read the article

  • Creating a folder inside a Mac OS app

    - by Negative Zero
    I want an app that is "self-contained" (I don't know if that's the right word; "putting the app in the trash will remove everything" is what I mean). But the app requires some resources to run, which I usually put into a folder. I want to move those resources into the app bundle (package contents). Can I do that? Is it a good practice? When I test the app by running it directly from Xcode, it runs fine. But if I run it from Finder, the app says it fails to create the resources folder because permission is denied. I checked the app folder's permissions, and my user has read/write access. I am wondering what causes this different behavior. The last option is to use the Application Support folder, but I don't want to leave traces when the user deletes the app. Can someone help me out here?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]
    I'm wrapping up a bit of the work we've been doing on data-movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.
    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).
    Detail: for those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests, and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; these will be detailed separately.
    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) of each file transferred. The code is available here.
    We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly generated binary data and do not benefit from compression (a separate discussion topic). Our file-generation tool is available here.
    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post.
    For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source-file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: [process diagram]
    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size across 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.
    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.
    [chart: transfer-rate improvement by block size for each source file size]
    The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases; but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient, presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs.
    - Once you pass the 250MB source file size, the difference in rate between 1MB and 4MB blocks is more or less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    [chart: improvement by block size, plotted against source file size]
    The above is another view of the same data as the prior chart, just with the axes swapped (the x-axis represents file size and the plotted data shows the improvement for each block size). It again highlights that the 1MB block size is probably the best overall choice while also showing the benefits of some of the other block sizes at particular source file sizes.
    [chart: total upload duration by block size for each source file size]
    This last chart shows the change in total duration of the file uploads for the different block sizes across the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
    Summary: what we have found so far is that blocking your file uploads and uploading the blocks in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.
    Related Resources:
    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from the non-optimized transfer tests: experiment metadata; experiment datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads); raw data
    - OData feeds of raw data from the blocked/parallelized transfer tests: experiment metadata; experiment datasets; raw data (256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons
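
    The linked source is the authoritative implementation; as a rough illustration of the approach described above (1.x StorageClient API; the container/path parameters and the block-ID scheme are illustrative assumptions, and the per-block MD5 step is omitted for brevity):

      using System;
      using System.IO;
      using System.Threading.Tasks;
      using Microsoft.WindowsAzure.StorageClient;

      static class BlockedUploader
      {
          public static void Upload(CloudBlobContainer container, string path)
          {
              const int BLOCK_SIZE = 1 * 1024 * 1024; // 1MB: the best fixed choice per the tests above

              byte[] source = File.ReadAllBytes(path);
              int blockCount = (source.Length + BLOCK_SIZE - 1) / BLOCK_SIZE;
              var blockIds = new string[blockCount];

              CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));

              // upload blocks in parallel; block IDs must be base64 and all the same length
              Parallel.For(0, blockCount, i =>
              {
                  int offset = i * BLOCK_SIZE;
                  int length = Math.Min(BLOCK_SIZE, source.Length - offset);
                  string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));

                  using (var ms = new MemoryStream(source, offset, length))
                  {
                      blob.PutBlock(blockId, ms, null); // third argument is the optional content MD5
                  }
                  blockIds[i] = blockId;
              });

              blob.PutBlockList(blockIds); // commit the uploaded blocks as a single blob
          }
      }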

    Read the article

  • Is a bigger display (monitor) always better for development?

    - by Jitendra Vyas
    Is a bigger display (monitor) always better for development? I'm going to buy a new LCD monitor. I mostly work in Adobe Photoshop, HTML, CSS, jQuery, and WordPress. Budget is not a problem, and there are many size options. My questions are:
    - Is the maximum size always better, or are very large monitors not always a good choice?
    - Would it be better to buy two 21.5-inch monitors rather than one 30-inch monitor?
    - Which monitor size would you prefer, between 21.5 and 30 inches, if budget is not a problem?

    Read the article

  • How to determine the size of a package in terminal prior to downloading?

    - by user14590
    When using apt-get install <package_name> and there are dependencies that need to be downloaded, the terminal outputs the names of the additional packages and the total size, and asks for confirmation before downloading. But when the dependencies are already satisfied and nothing but the named package needs to be downloaded, there is no size output and no confirmation. When using Synaptic, I can see the total size that new packages will use after installation, but there is no way to see the size that needs to be downloaded, except to go from package to package and use Properties to see the compressed size. I would like to know if there is a way to see the size of a package (or packages) in the terminal and in Synaptic prior to downloading and installing it/them.
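
    A sketch of two ways to read the sizes from the package metadata before installing (<package_name> is a placeholder):

      # Size: is the download (.deb) size in bytes; Installed-Size: is in kilobytes
      apt-cache show <package_name> | grep -E '^(Size|Installed-Size):'

      # or list the URLs and byte sizes of everything that would be fetched
      apt-get install --print-uris -y <package_name>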

    Read the article

  • SYSPART hidden folder in Windows 7

    - by BenGC
    We had a user with a Lenovo G-series laptop getting a STOP error on boot. We reinstalled Windows 7 Home Premium using non-OEM media and restored the user's files from backup. We are now seeing a hidden folder in the root of the C:\ drive called SYSPART, which appears to contain a copy of the contents of the C:\ drive; so while the user has 160 GB of files, the drive is using 320 GB because of this folder. What is it, and is it safe to delete?

    Read the article

  • Admin can't view a non-admin user's folder in OS X

    - by adolf garlic
    I'm trying to add a new keyboard layout for a non-admin user on my Mac. I had thought that the keyboard layout would apply to all users when I added it to mine, but alas, no. I cannot get into the other user's Library/Keyboard Layouts folder; it won't let me (but I'm an admin, FFS!). I even used Get Info to set it to "everyone read and write", but it still tells me that I don't have permission. How on earth can I update the other user's Keyboard Layouts folder?
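
    A sketch of the usual Terminal workaround, assuming the other account is named otheruser and the layout file is MyLayout.keylayout (both placeholders); each user's home subfolders are owned by that user, so even admins need sudo:

      # copy the layout into the other user's folder, then hand ownership back to them
      sudo cp MyLayout.keylayout "/Users/otheruser/Library/Keyboard Layouts/"
      sudo chown otheruser "/Users/otheruser/Library/Keyboard Layouts/MyLayout.keylayout"

      # or install it once for every user via the system-wide folder
      sudo cp MyLayout.keylayout "/Library/Keyboard Layouts/"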

    Read the article

  • Office 2010 can't open a folder on my SkyDrive

    - by mrbill.mp
    I followed the steps to upload documents to a folder on Microsoft's SkyDrive. I go to Backstage > Share > Save to SkyDrive (at this point it always shows "Sorry, we are unable to connect to SkyDrive."). Then I click the Try Again button and it connects. Then I click the folder I want to put the document into and click Save As, and I get "Could not open http://etc..........". Why?

    Read the article

  • Sync local folder with WebDAV

    - by daniels
    I have a local folder on my Mac that I want to sync with a WebDAV server. There are a lot of files in the folder. After I edit some files or add/remove folders, I want to be able to sync the changes to the WebDAV server, ignoring what is on the server and always using my files. Is there any script or tool that I can use from the command line to do that? Mounting the resource is not a solution.

    Read the article

  • Nothing in dev folder

    - by 4321bust
    Hi, I'm new to this, so bear with me please. I'm attempting to set up Git on my Mac and need to be using my dev folder. However, there seems to be nothing in the folder ("zero KB on disk"), with no subdirectories listed. Other hidden directories are intact. I've never really gone this deep into things before, so I'm not sure how or why anything would previously have been deleted. Any help greatly appreciated.

    Read the article
