Search Results

Search found 3700 results on 148 pages for 'strongly typed dataset'.

Page 102/148 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • json object composition details

    - by Ethan
    In JSON text, is the 'value' in a basic single-pair object the name of a value type (e.g. [string, number, object]), or an actual value for a typed object (e.g. 2, or "dog", or Object3)? This is how http://www.json.org/ presents the information: "An object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon) and the name/value pairs are separated by , (comma)."

    Read the article

  • Send keystrokes to TextBox

    - by tanascius
    I have a derived TextBox where I intercept the user input to manipulate it. However, I have to preserve the original typed input. So my idea was to hold an inner TextBox within my derived class and send the user's input to that TextBox before manipulating it. The reason for this approach is that I do not want to take care of all these special actions myself: typing something, ctrl+a, [del], typing something else, [backspace] and so on ... However, I do not know how to send a single keystroke (keycode, ascii, char) to a TextBox. Maybe you have another idea that does without an inner TextBox at all? Thank you!

    Read the article

  • How do I `SUM` by multiple columns in Excel

    - by dwwilson66
    I have a comma-delimited file that includes two columns: a date/time (which imports as Excel's mm/dd/yyyy hh:mm custom format) and a status of 1 or 0. The status represents a piece of equipment either being on or off. I'm trying to generate a graph that will show hours up vs. down by day. CONSIDER:

        1/1/2012 00:00, 1
        1/1/2012 03:00, 0
        1/1/2012 14:00, 1
        1/3/2012 00:00, 0

    This tells me that the equipment was up for three hours, down for eleven hours, and then up for thirty-four hours (across two calendar days). However, I would like to generate a graph that shows how many hours PER DAY we were up or down. CONSIDER:

        1/1 XXXXXXXXXXXXX----------- (up 13, down 11)
        1/2 XXXXXXXXXXXXXXXXXXXXXXXX (up 24)

    To me, it seems that I need to generate a dataset summing HOURS by STATUS by CALENDAR DAY... but I can't seem to find a flavor of pivot table or nested SUM(IF(SUMIF(...))) combination to make it work. Most troubling is accounting for date changes... in my example above, since my uptime starting at 14:00 on 1/1/2012 crosses midnight, I need to know that 10 uptime hours get totalled with 1/1/2012 and 24 uptime hours get totalled with 1/2/2012. I may be able to do something with a calendar list to drive the date summation, but then I need a way to compare 01/01/2012 to 01/01/2012 03:00 as equal. There's got to be a way along the lines of if(INTEGER-PORTIONS-OF-SERIAL-DATES-ARE-EQUAL, TOTAL-HOURS-IF-VALUE-IS_1, 0), but nothing's worked so far. Any suggestions? I've been battling this most of the day and need a fresh perspective. Thanks. (A sketch of the midnight-splitting bookkeeping follows this entry.)

    Read the article
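
    This isn't an Excel formula, but the bookkeeping the entry above describes (walking each "up" interval and cutting it at every midnight so the hours land on the right calendar day) can be sketched in a few lines. The sample data mirrors the rows in the question; the class and record names are illustrative, not part of the original:

        import java.time.*;
        import java.util.*;

        public class UptimeByDay {
            // one status change: the timestamp and the new status (1 = up, 0 = down)
            record Change(LocalDateTime at, int status) {}

            public static void main(String[] args) {
                List<Change> changes = List.of(
                    new Change(LocalDateTime.of(2012, 1, 1, 0, 0), 1),
                    new Change(LocalDateTime.of(2012, 1, 1, 3, 0), 0),
                    new Change(LocalDateTime.of(2012, 1, 1, 14, 0), 1),
                    new Change(LocalDateTime.of(2012, 1, 3, 0, 0), 0));

                Map<LocalDate, Double> upHours = new TreeMap<>();
                for (int i = 0; i + 1 < changes.size(); i++) {
                    if (changes.get(i).status() != 1) continue;      // only accumulate "up" intervals
                    LocalDateTime start = changes.get(i).at();
                    LocalDateTime end = changes.get(i + 1).at();
                    // walk the interval one midnight at a time so hours land on the right day
                    while (start.isBefore(end)) {
                        LocalDateTime nextMidnight = start.toLocalDate().plusDays(1).atStartOfDay();
                        LocalDateTime sliceEnd = end.isBefore(nextMidnight) ? end : nextMidnight;
                        double hours = Duration.between(start, sliceEnd).toMinutes() / 60.0;
                        upHours.merge(start.toLocalDate(), hours, Double::sum);
                        start = sliceEnd;
                    }
                }
                upHours.forEach((day, h) -> System.out.println(day + ": up " + h + " h"));
                // prints 2012-01-01: up 13.0 h and 2012-01-02: up 24.0 h
            }
        }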

  • Parsing a String into date with pattern:"dd/MM/yyyy"

    - by kawtousse
    Hi, I want to take a date in the format MM/dd/yyyy (for example 04/29/2010), convert it to 29/04/2010, and insert it into a MySQL database field of type Date. So I have this code:

        String dateimput=request.getParameter("datepicker");
        DateFormat df = new SimpleDateFormat("dd/MM/yyyy");
        Date dt = null;
        try {
            dt = df.parse(dateimput);
            System.out.println("date imput is:" +dt);
        } catch (ParseException e) {
            e.printStackTrace();
        }

    but it gives me these errors: 1. date imput is: Fri May 04 00:00:00 CEST 2012 (not the value that was entered); 2. a mismatch with the MySQL date type. I cannot pinpoint the error exactly. Please help. (A corrected parsing sketch follows this entry.)

    Read the article
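
    The output quoted above is what you get when the pattern handed to SimpleDateFormat describes the target format instead of the incoming one: with lenient parsing, "04/29/2010" read as dd/MM/yyyy becomes day 4 of month 29, which rolls over into May 2012. A minimal sketch of the usual fix, parsing with the datepicker's format and formatting with a second SimpleDateFormat (class and variable names are illustrative; yyyy-MM-dd is shown because MySQL accepts it directly for a DATE column):

        import java.text.ParseException;
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class DatePickerParse {
            public static void main(String[] args) throws ParseException {
                String input = "04/29/2010";                                   // what the datepicker sends: MM/dd/yyyy
                SimpleDateFormat picker = new SimpleDateFormat("MM/dd/yyyy");  // must match the incoming string
                SimpleDateFormat mysql  = new SimpleDateFormat("yyyy-MM-dd");  // a format MySQL's DATE type accepts

                Date parsed = picker.parse(input);
                System.out.println(mysql.format(parsed));                      // prints 2010-04-29
            }
        }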

  • Cannot find code completion and suggestions in Aptana 2.0

    - by Benjamin
    Greetings, I recently started using Aptana 1.5 to work on a site via FTP, developing pages in .php format with a mixture of HTML, CSS, jQuery, and -- naturally -- PHP. I develop a page with all of these elements together to save time, then separate them. One thing that I really liked about Aptana 1.5 was that it completed code (brackets, parentheses, etc.) and suggested things as I typed. I upgraded to 2.0 today and cannot figure out for the life of me how to bring back all of these capabilities -- or remember if I had to do anything in 1.5 to get them. Here is a picture of what I am looking for. Notice the suggestion for margin: and the "HTML, Js, CSS" up top. Any help with getting this to work in 2.0 would be greatly appreciated. Ever since installing 2.0, 1.5 has been throwing error messages and I can't seem to get my FTP settings to stay locked in.

    Read the article

  • Understanding tcptraceroute versus http response

    - by kojiro
    I'm debugging a web server that has a very high wait time before responding. The server itself is quite fast and has no load, so I strongly suspect a network problem. Basically, I make a web request:

        wget -O/dev/null http://hostname/
        --2013-10-18 11:03:08--  http://hostname/
        Resolving hostname... 10.9.211.129
        Connecting to hostname|10.9.211.129|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: ‘/dev/null’
        2013-10-18 11:04:11 (88.0 KB/s) - ‘/dev/null’ saved [13641]

    So you see it took about a minute to give me the page, but it does give it to me with a 200 response. So I try a tcptraceroute to see what's up:

        $ sudo tcptraceroute hostname 80
        Password:
        Selected device en2, address 192.168.113.74, port 54699 for outgoing packets
        Tracing the path to hostname (10.9.211.129) on TCP port 80 (http), 30 hops max
         1  192.168.113.1  0.842 ms  2.216 ms  2.130 ms
         2  10.141.12.77  0.707 ms  0.767 ms  0.738 ms
         3  10.141.12.33  1.227 ms  1.012 ms  1.120 ms
         4  10.141.3.107  0.372 ms  0.305 ms  0.368 ms
         5  12.112.4.41  6.688 ms  6.514 ms  6.467 ms
         6  cr84.phlpa.ip.att.net (12.122.107.214)  19.892 ms  18.814 ms  15.804 ms
         7  cr2.phlpa.ip.att.net (12.122.107.117)  17.554 ms  15.693 ms  16.122 ms
         8  cr1.wswdc.ip.att.net (12.122.4.54)  15.838 ms  15.353 ms  15.511 ms
         9  cr83.wswdc.ip.att.net (12.123.10.110)  17.451 ms  15.183 ms  16.198 ms
        10  12.84.5.93  9.982 ms  9.817 ms  9.784 ms
        11  12.84.5.94  14.587 ms  14.301 ms  14.238 ms
        12  10.141.3.209  13.870 ms  13.845 ms  13.696 ms
        13  * * *
        …
        30  * * *

    I tried it again with 100 hops, just to be sure – the packets never get there. So how is it that the server does respond to requests via http, even after a minute? Shouldn't all requests just die? I'm not sure how to proceed debugging why this server is slow (as opposed to why it responds at all).

    Read the article

  • Storing n-grams in database in < n number of tables.

    - by kurige
    If I was writing a piece of software that attempted to predict what word a user was going to type next, using the two previous words the user had typed, I would create two tables. Like so:

        == 1-gram table ==
        Token | NextWord | Frequency
        ------+----------+-----------
        "I"   | "like"   | 15
        "I"   | "hate"   | 20

        == 2-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I like" | "apples"   | 8
        "I like" | "tomatoes" | 12
        "I hate" | "tomatoes" | 20
        "I hate" | "apples"   | 2

    Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", then the software predicts that the next word the user is going to type is "tomatoes". However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided that I wanted to take the 5 or 6 preceding words into account when predicting the next word, then I would need 5-6 tables, and an exponential increase in space per n-gram. What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support? (A sketch follows this entry.)

    Read the article
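
    One way to see why a single structure can cover every n, sketched under the assumption that the context is simply the preceding words joined into one string: 1-gram, 2-gram and 5-gram lookups then all live in the same map, and the same shape maps onto a single database table with context, next_word and frequency columns (optionally plus an n column). The class below is illustrative, not a schema:

        import java.util.*;

        public class NGramStore {
            // context (preceding words joined with spaces) -> next word -> frequency
            private final Map<String, Map<String, Integer>> counts = new HashMap<>();

            public void observe(String context, String nextWord) {
                counts.computeIfAbsent(context, k -> new HashMap<>())
                      .merge(nextWord, 1, Integer::sum);
            }

            public Optional<String> predict(String context) {
                Map<String, Integer> next = counts.get(context);
                if (next == null) return Optional.empty();
                return next.entrySet().stream()
                           .max(Map.Entry.comparingByValue())
                           .map(Map.Entry::getKey);
            }

            public static void main(String[] args) {
                NGramStore store = new NGramStore();
                store.observe("I", "like");          // a 1-gram context
                store.observe("I", "hate");
                store.observe("I", "hate");
                store.observe("I hate", "tomatoes"); // a 2-gram context, same structure
                System.out.println(store.predict("I"));      // Optional[hate]
                System.out.println(store.predict("I hate")); // Optional[tomatoes]
            }
        }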

  • Tag-like autocompletion and caret/cursor movement in contenteditable elements.

    - by jimeh
    I'm working on a jQuery plugin that will allow you to do @username style tags, like Facebook does in their status update input box. My problem is that even after hours of researching and experimenting, it seems REALLY hard to simply move the caret. I've managed to inject the <a> tag with someone's name, but placing the caret after it seems like rocket science, especially if it's supposed to work in all browsers. And I haven't even looked into replacing the typed @username text with the tag yet, rather than just injecting it as I'm doing right now... There's a ton of questions about working with contenteditable here on Stack Overflow, and I think I've read all of them, but they don't really cover properly what I need. So any more information anyone can provide would be great :)

    Read the article

  • How to make a button button type (custom keyboard)

    - by Forrest
    I am creating a Spanish application in C# which will help first-year students at my high school. I want to create a "custom keyboard" for characters that cannot be easily typed (Á É Í Ó Ú Ñ Ü ¡ ¿ á é í ó ú ñ ü). I was just thinking of making buttons across the bottom of the screen which would add that character to the text field when pressed. I have not been able to find anything of use. (A small sketch of the idea follows this entry.) Thanks in advance

    Read the article
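
    The question is about C#/WinForms; purely to illustrate the mechanics the asker describes (buttons that insert a character at the caret without stealing focus from the text field), here is the same idea sketched in Java Swing with the characters from the question. The layout and class name are assumptions, not the asker's code:

        import javax.swing.*;
        import java.awt.*;

        public class AccentKeyboard {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("Custom keyboard sketch");
                    JTextField field = new JTextField(30);
                    JPanel keys = new JPanel(new FlowLayout());
                    for (String ch : "á é í ó ú ñ ü ¡ ¿".split(" ")) {
                        JButton b = new JButton(ch);
                        b.setFocusable(false);                                 // keep the caret in the text field
                        b.addActionListener(e -> field.replaceSelection(ch));  // insert at the caret position
                        keys.add(b);
                    }
                    frame.add(field, BorderLayout.CENTER);
                    frame.add(keys, BorderLayout.SOUTH);
                    frame.pack();
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setVisible(true);
                });
            }
        }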

  • Types in Bytecode

    - by HH
    Hey everyone, I've been working for some time on (Java) bytecode; however, it had never occurred to me to ask why some instructions are typed. I understand that in an ADD operation we need to distinguish between an integer addition and a FP addition (that's why we have IADD and FADD). However, why do we need to distinguish between ISTORE and FSTORE? They both involve the exact same operation, which is moving 32 bits from the stack to a local variable position. The only answer I can think of is type-safety, to prevent this: (ILOAD, ILOAD, FADD). However, I believe that type-safety is already enforced at the Java language level. OK, the class file format is not directly coupled with Java, so is this a way to enforce type-safety for languages that do not support it? Any thoughts? (A small illustration follows this entry.) Thank you.

    Read the article
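
    For reference, this is the kind of method the distinction shows up in: compiling a snippet like the one below with javac produces iadd/istore for the int local and fadd/fstore for the float local, and the verifier uses exactly that type information to reject mixes such as the (ILOAD, ILOAD, FADD) sequence mentioned above. The class name is made up and the exact local-variable slots depend on the compiler:

        public class StoreKinds {
            static float mix(int a, float b) {
                int x = a + 1;       // compiles to iadd followed by an istore
                float y = b + 1.0f;  // compiles to fadd followed by an fstore
                return x + y;        // the int is widened (i2f) before the final fadd
            }
        }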

  • Accidentally deleted /opt/local/bin without backup. Any help?

    - by Aaron
    Hi all, I'm on a Mac OS X 10.5.8 I was recently uninstalling mysql5 from /opt/local/bin. I typed: rm -rf /opt/local/bin mysql* instead of rm -rf /opt/local/bin/mysql* This deleted my entire /opt/local/bin directory which puts me in a bit of a bind. Is there any way to recover these files? If not, I have a friend that is using a similar set of programs, would it be possible to use the contents of his folder? If I end up needing to re-install everything in this folder, what is the best way to go about doing this? Thanks in advance!

    Read the article

  • AppEngine dev_appserver.py not showing any outputs

    - by shin
    I installed Python 2.6 and Google App Engine (GAE). I realized that GAE does not run on 2.6, so I installed 2.5 as well. Now I have the very basic code below, and it does not show anything at localhost:8080. I typed the following in cmd.exe under my dir testapps:

        c:\Users\myname\testapps> "\Program Files\Google\google_appengine\dev_appserver.py" helloworld

    I am hoping someone can lead me in the right direction.

    helloworld/helloworld.py:

        print 'Content-Type: text/plain'
        print ''
        print 'Hello, world!'

    helloworld/app.yaml:

        application: helloworld
        version: 1
        runtime: python
        api_version: 1
        handlers:
        - url: /.*
          script: helloworld.py

    Read the article

  • Subsonic 3 : get only certain columns

    - by CTGA
    Hello, I use Subsonic 3, SQL Server 2008 and Json.Net. Here is my problem: I have a house table with id, lot number, address, location, ownerid, and an owner table with id, name, description. I'd like to get only the owner name and the address of the houses where the location is "San Francisco". How can I do this with Subsonic? My problem is that I can only get a typed list or a datareader; I don't understand how to get just an array or a simple list. What is the best solution? Create a custom class? Create a View? Anything else? My aim is then to serialize the data to send it back to my application, but serialization takes seriously long because of the numerous relationships (my example here is simplified; there are indeed: house - owner - city - country etc.), whereas I need only 2 fields... Thank you

    Read the article

  • .NET generic class instance - passing a variable data type

    - by FerretallicA
    As the title suggests, I'm trying to pass a variable data type to a template class. Something like this:

        frmExample = New LookupForm(Of Models.MyClass)                'Works fine
        Dim SelectedType As Type = InstanceOfMyClass.GetType()        'Works fine
        frmExample = New LookupForm(Of SelectedType)                  'Ba-bow!
        frmExample = New LookupForm(Of InstanceOfMyClass.GetType())   'Ba-bow!

        LookupForm<Models.MyClass> frmExample;
        Type SelectedType = InstanceOfMyClass.GetType();
        frmExample = new LookupForm<SelectedType.GetType()>();        //Ba-bow
        frmExample = new LookupForm<(Type)SelectedType>();            //Ba-bow

    I'm assuming it's something to do with the template being processed at compile time, but even if I'm off the mark there, it wouldn't solve my problem anyway. I can't find any relevant information on using Reflection to instance template classes either. (How) can I create an instance of a dynamically typed repository at runtime?

    Read the article

  • YUI Autocomplete: search after pasting?

    - by rassie
    I'm using an auto-complete widget from YUI to implement live search as in the examples. However, it works fine when search text is typed in, but fails to work when the text is pasted into the field. Which would be the proper way to initiate an autocompletion on paste? Haven't found anything for that in the documentation... EDIT: Pasting is not Ctrl-V, it's usually "Paste" from the context-menu. YUI does react to a keypress, but doesn't if anything is pasted by the mouse.

    Read the article

  • installing glassfish on ubuntu karmic using Synaptic package manager

    - by Danny
    I'm trying to learn to use GlassFish for the first time. My IDE is NetBeans and I've installed the GlassFish plugin for NetBeans. I opened up the Synaptic package manager and typed in glassfish. My choices were:

        imqv2
        glassfish-activation
        glassfish-mail
        glassfish-appserv
        glassfish-toplink-essentials
        glassfish-jmac-api
        glassfish-javaee

    I'm not sure what is in each package, or which packages are needed. I can't seem to find anything that tells me anything descriptive about these packages. I've seen a lot of tutorials on how to install GlassFish, but I'd prefer to use apt-get/Synaptic to install it so that Synaptic can take care of updating.

    Read the article

  • problem with squid and ecap for content analysis

    - by Cornel
    Hi, I need to inspect the content of a form before it is processed on the server. I am using the Squid proxy server with an eCAP adapter. In the adapter I can see the text typed into the text area, but the content doesn't get to my JSP page when using an eCAP adapter that lets me take a look at the data coming in. I am using Squid proxy 3.1.10 with the eCAP adapter library libecap v0.0.3 and WebSphere App Server 6.1. Does anybody have an idea about what is causing this? Thank you, Cornel

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to setup an offsite backup solution for about 500GB of data that's currently stored between my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage, for my data. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio, and installer disk images for all of my games. I have found two services which meet most of this: Mozy Carbonite However, both services impose low bandwidth caps, on the order of 50kb/s, which prevent me from backing up a dataset of this size effectively (somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it tries to ignore pretty much every file in my backup set by default, because the files are mostly iso files and vmdk files, which aren't backed up by default. There are other services such as EC2 which don't have such bandwidth caps, but such services are typically stored in highly redundant servers, and therefore cost on the order of 10 cents/gb/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data which would pay for itself after ~2-3 months) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as offering public HTTP access to the data) Does anything exist meeting those requirements or am I basically hosed?

    Read the article

  • How well does Scala Perform Compared to Java?

    - by Teja Kantamneni
    The question actually says it all. The reason behind this question is that I am about to start a small side project and want to do it in Scala. I have been learning Scala for the past month and am now comfortable working with it. The Scala compiler itself is pretty slow (unless you use fsc). So how well does it perform on the JVM? I previously worked with Groovy and had seen it sometimes outperform Java. My question is: how well does Scala perform on the JVM compared to Java? I know Scala has some very good features (FP, dynamic lang, statically typed...), but at the end of the day we need the performance...

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in "Reducing the dimensionality of data with neural networks" by autoencoding the Olivetti face dataset with an adapted version of the MNIST digits MATLAB code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much during fine-tuning. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with

        negdata = poshidstates*vishid' + repmat(visbiases,numcases,1);

    I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for real-valued visible unit RBMs below, and for the whole deep training. The rest of the code can be found here.

    rbmvislinear.m:

        epsilonw   = 0.001; % Learning rate for weights
        epsilonvb  = 0.001; % Learning rate for biases of visible units
        epsilonhb  = 0.001; % Learning rate for biases of hidden units
        weightcost = 0.0002;
        initialmomentum = 0.5;
        finalmomentum   = 0.9;

        [numcases numdims numbatches]=size(batchdata);

        if restart ==1,
          restart=0;
          epoch=1;

          % Initializing symmetric weights and biases.
          vishid     = 0.1*randn(numdims, numhid);
          hidbiases  = zeros(1,numhid);
          visbiases  = zeros(1,numdims);

          poshidprobs = zeros(numcases,numhid);
          neghidprobs = zeros(numcases,numhid);
          posprods    = zeros(numdims,numhid);
          negprods    = zeros(numdims,numhid);
          vishidinc   = zeros(numdims,numhid);
          hidbiasinc  = zeros(1,numhid);
          visbiasinc  = zeros(1,numdims);
          sigmainc    = zeros(1,numhid);
          batchposhidprobs=zeros(numcases,numhid,numbatches);
        end

        for epoch = epoch:maxepoch,
          fprintf(1,'epoch %d\r',epoch);
          errsum=0;
          for batch = 1:numbatches,
            if (mod(batch,100)==0)
              fprintf(1,' %d ',batch);
            end

            %%%%%%%%% START POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            data = batchdata(:,:,batch);
            poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
            batchposhidprobs(:,:,batch)=poshidprobs;
            posprods  = data' * poshidprobs;
            poshidact = sum(poshidprobs);
            posvisact = sum(data);

            %%%%%%%%% END OF POSITIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            poshidstates = poshidprobs > rand(numcases,numhid);

            %%%%%%%%% START NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
            neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
            negprods  = negdata'*neghidprobs;
            neghidact = sum(neghidprobs);
            negvisact = sum(negdata);

            %%%%%%%%% END OF NEGATIVE PHASE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            err= sum(sum( (data-negdata).^2 ));
            errsum = err + errsum;

            if epoch>5,
              momentum=finalmomentum;
            else
              momentum=initialmomentum;
            end;

            %%%%%%%%% UPDATE WEIGHTS AND BIASES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            vishidinc = momentum*vishidinc + ...
                        epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
            visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
            hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);

            vishid = vishid + vishidinc;
            visbiases = visbiases + visbiasinc;
            hidbiases = hidbiases + hidbiasinc;
            %%%%%%%%%%%%%%%% END OF UPDATES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
          end
          fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
        end

    dofacedeepauto.m:

        clear all
        close all

        maxepoch=200; %In the Science paper we use maxepoch=50, but it works just fine.
        numhid=2000; numpen=1000; numpen2=500; numopen=30;

        fprintf(1,'Pretraining a deep autoencoder. \n');
        fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);

        load fdata %makeFaceData;

        [numcases numdims numbatches]=size(batchdata);

        fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
        restart=1;
        rbmvislinear;
        hidrecbiases=hidbiases;
        save mnistvh vishid hidrecbiases visbiases;

        maxepoch=50;
        fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
        batchdata=batchposhidprobs;
        numhid=numpen;
        restart=1;
        rbm;
        hidpen=vishid; penrecbiases=hidbiases; hidgenbiases=visbiases;
        save mnisthp hidpen penrecbiases hidgenbiases;

        fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
        batchdata=batchposhidprobs;
        numhid=numpen2;
        restart=1;
        rbm;
        hidpen2=vishid; penrecbiases2=hidbiases; hidgenbiases2=visbiases;
        save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;

        fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
        batchdata=batchposhidprobs;
        numhid=numopen;
        restart=1;
        rbmhidlinear;
        hidtop=vishid; toprecbiases=hidbiases; topgenbiases=visbiases;
        save mnistpo hidtop toprecbiases topgenbiases;

        backpropface;

    Thanks for your time

    Read the article

  • Generating a Grid of NSTextField Objects From NSDictionary Items

    - by SteveStifler
    I'm trying to create a vocabulary study application using Obj-C and the Cocoa frameworks. I have about two weeks' experience in both areas and have reached the edge of my current knowledge. Here's where I'm stuck. When I press a checkbox, a corresponding plist is loaded into memory as an NSDictionary. I want to generate a "Label: Textfield" pair for each key:value pair, where the Label is the key. When the text typed into the Textfield matches the key's value, I want the Label's text to turn green. So how would I generate this grid, and once generated, how would I make the text green upon correct input? Thanks!

    Read the article

  • Prevent strings stored in memory from being read by other programs

    - by Roy
    Some programs like Process Explorer are able to read strings in memory (for example, an error message written in my code can be displayed easily, even though it is already compiled). Imagine I have a password string "123456" allocated sequentially in memory. What if hackers are able to get hold of the password typed by the user? Is there any way to prevent strings from being seen so clearly? Oh yes, also: if I hash the password and send it from the client to the server to compare against the stored database hash value, won't the hacker be able to store the same hash and replay it to gain access to the user account? Is there any way to prevent replaying? (A sketch follows this entry.) Thank you!

    Read the article
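
    One common mitigation for the first half of the question, shown only as an illustrative sketch: keep the secret in a char[] rather than a String and overwrite it as soon as it has been used, which shortens the window in which a memory dump can see the plaintext (it does not make it impossible). The authenticate method below is a made-up placeholder; the replay half of the question is usually addressed with a server-issued nonce or challenge rather than a bare hash.

        import java.util.Arrays;

        public class PasswordHandling {
            public static void main(String[] args) {
                // a char[] can be wiped in place; a String is immutable and lingers until GC
                char[] password = {'1', '2', '3', '4', '5', '6'};   // stands in for user input
                try {
                    authenticate(password);                          // hypothetical check
                } finally {
                    Arrays.fill(password, '\0');                     // wipe our copy as soon as we are done
                }
            }

            private static void authenticate(char[] password) {
                // hash and compare here, ideally a salted scheme combined with a
                // per-session challenge so a captured value cannot simply be replayed
            }
        }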

  • Bypass using Alternate Access Mappings to Open Web from SPItemEventProperties

    - by Greg Ogle
    In the following code,

        // class overrides SPItemEventreceiver
        public override void ItemAdding(SPItemEventProperties properties)
        {
            using (var site = new SPSite(properties.SiteId)) //SiteId is integer
            {
                ...
            }
        }

    The following exception is thrown: System.IO.FileNotFoundException: The Web application at http://URL could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application. One way to work around this is to hard-code (or configure) the URL specified in Alternate Access Mappings. Putting the correct URL in Alternate Access Mappings is ultimately the correct solution, but if possible, I need a work-around that doesn't require configuration.

    Read the article

  • Pivot Table grand total across columns

    - by Jon
    I'm using Excel 2010 and Power Pivot. I'm trying to calculate confidence and velocity for a development team. I'm extracting some information from our time and defect system each day and building a data set. What I need to do with Excel is the calculations. So each day I add to my data set one row per task in the current project, the estimate for that task, and the time spent on that task. What I want to calculate is estimate/actual for each task but also for each person. The trouble is that each day the actual is cumulative, so I need to pick out the maximum value for each task. The estimate should remain unchanged. I can make this work at the task level with a calculated measure (=MAX(worked)/MAX(estimate)), but I don't know how to total this up for a person. I need the sum of the max worked for each task. So a dataset might look like:

        Name  Task  Estimate  Worked
        N1    T1    3         1
        N2    T2    3         1
        N3    T3    4         1
        N1    T1    3         2
        N2    T4    5         1
        N3    T3    4         2
        N1    T5    1         2
        N2    T6    2         3
        N3    T7    3         2

    What I want to see is that for task T1, 2 days were worked against an estimate of 3 days, so 2/3. For person N1 I want to see that they worked a total of 4 days against an estimate of 4 days, so 4/4. For person N2 they worked 5 days for an estimate of 10 days. Any ideas on how I can achieve this? (A sketch of the arithmetic follows this entry.)

    Read the article
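
    Not Power Pivot or DAX, just the arithmetic from the entry above spelled out: for each person, take the maximum Worked (and maximum Estimate) per task, then sum those maxima. A sketch with the sample rows from the question (class and record names are illustrative):

        import java.util.*;
        import java.util.stream.*;

        public class VelocitySketch {
            record Row(String name, String task, int estimate, int worked) {}

            public static void main(String[] args) {
                List<Row> rows = List.of(
                    new Row("N1", "T1", 3, 1), new Row("N1", "T1", 3, 2), new Row("N1", "T5", 1, 2),
                    new Row("N2", "T2", 3, 1), new Row("N2", "T4", 5, 1), new Row("N2", "T6", 2, 3),
                    new Row("N3", "T3", 4, 1), new Row("N3", "T3", 4, 2), new Row("N3", "T7", 3, 2));

                // group rows by person, then by task
                Map<String, Map<String, List<Row>>> byPersonTask = rows.stream()
                    .collect(Collectors.groupingBy(Row::name, TreeMap::new,
                             Collectors.groupingBy(Row::task)));

                // per person: sum over that person's tasks of max(worked) and of max(estimate)
                byPersonTask.forEach((person, tasks) -> {
                    int worked = tasks.values().stream()
                        .mapToInt(list -> list.stream().mapToInt(Row::worked).max().orElse(0)).sum();
                    int estimate = tasks.values().stream()
                        .mapToInt(list -> list.stream().mapToInt(Row::estimate).max().orElse(0)).sum();
                    System.out.println(person + ": worked " + worked + " vs estimate " + estimate);
                });
                // prints N1: 4 vs 4, N2: 5 vs 10, N3: 4 vs 7
            }
        }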

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible that squid/adzap does not cache for a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent

        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
        refresh_pattern .               0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >