Search Results

Search found 22521 results on 901 pages for 'output redirect'.

Page 356/901

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS7 with GZIP. That is great... but, for one particular page, I need to stream it to the user, using Response.Flush() when needed. But when the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to Flush the content out to the user. Is there a way that I can have this one page opt out of the compression? One possible option: I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it. E.g. Response.ContentType = "x-text/html". This works okay with IE8, as it falls back to displaying the HTML. But Firefox will ask the user what to do with the unknown file type. This could work if there were another MIME type I could use that browsers would accept as HTML, that is not matched in the applicationhost.config. For reference, these are the mime types that will be compressed: <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> Other options? Are there other ways to opt out of compression?
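
    A hedged sketch of one commonly suggested alternative: rather than faking the content type, dynamic compression can be switched off for just this page in web.config with a <location> element, assuming the urlCompression section is not locked at the server level. The page name below is a placeholder.

        <location path="StreamingPage.aspx">  <!-- hypothetical page path; use the real one -->
          <system.webServer>
            <urlCompression doDynamicCompression="false" />
          </system.webServer>
        </location>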

    Read the article

  • DNS: domain2 points to domain1

    - by Yar
    I have one domain ("domain1") that is set up with hosting and mail (hosted by Gmail Apps). This domain works perfectly. I want a second domain ("domain2") to forward to domain1, but I don't want to use "DNS Forwarding." I would like to have it act EXACTLY like domain1, so that domain2/whatever points to the same resource as domain1/whatever WITHOUT AN HTTP REDIRECT NOR BROWSER TRICKS LIKE FRAMES. I would also love to be able to send mail to "blah@domain2" and have it go to "blah@domain1". Can this be set up, and how? I am using GoDaddy as registrar and DNS host for both domains. GoDaddy is also the web host for domain1, and mail hosting is with Google Apps.
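
    A sketch of the usual moving parts, with placeholder values standing in for the real ones: point domain2's A record at the same IP as domain1, let the web server answer for both names (ServerAlias in Apache), and add domain2 as a domain alias in Google Apps with Google's MX records so blah@domain2 is delivered to blah@domain1.

        ; DNS for domain2 (zone-file style; enter the equivalents in GoDaddy's DNS manager)
        @    IN  A     203.0.113.10          ; placeholder: the same IP domain1's A record uses
        @    IN  MX 10 ASPMX.L.GOOGLE.COM.   ; plus the remaining Google Apps MX records

        # Apache vhost that currently serves domain1
        ServerName  domain1.com
        ServerAlias domain2.com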

    Read the article

  • Meta refresh tag not working in (my) firefox?

    - by mplungjan
    Code like that on this page does not work in (my) Firefox 3.6, nor in Fx4 (WinXP SP3). It works in IE8, Safari 5, Opera 11, Mozilla 1.7, and Chrome 9. <meta http-equiv=refresh content="12; URL=meta2.htm"> <meta http-equiv="refresh" content="1; URL=http://fully_qualified_url.com/page2.html"> are completely ignored. Not that I use such back-button-killing things, but a LOT of sites do, possibly including my Linux Apache server, it seems, when it wants to show a 503 error page... If I check with Firebug or look at the generated content, I do not see the refresh tag changed in any way, so I am really curious what kind of plugin/addon could block it, which is why I googled (in vain) for a known bug... In about:config I have accessibility.blockautorefresh set to false, so that is not it. I ran in safe mode and OH MY GOD, STACKEXCHANGE IS FULL OF ADS but still no redirect

    Read the article

  • How to properly translate the "var" result of a lambda expression to a concrete type?

    - by CrimsonX
    So I'm trying to learn more about lambda expressions. I read this question on stackoverflow, concurred with the chosen answer, and have attempted to implement the algorithm in a C# console app with a simple LINQ expression. My question is: how do I translate the "var result" of the lambda expression into a usable object from which I can then print the result? I would also appreciate an in-depth explanation of what is happening when I declare outer => outer.Value.Frequency (I've read numerous explanations of lambda expressions but additional clarification would help) C# //Input : {5, 13, 6, 5, 13, 7, 8, 6, 5} //Output : {5, 5, 5, 13, 13, 6, 6, 7, 8} //The question is to arrange the numbers in the array in decreasing order of their frequency, preserving the order of their occurrence. //If there is a tie, like in this example between 13 and 6, then the number occurring first in the input array would come first in the output array. List<int> input = new List<int>(); input.Add(5); input.Add(13); input.Add(6); input.Add(5); input.Add(13); input.Add(7); input.Add(8); input.Add(6); input.Add(5); Dictionary<int, FrequencyAndValue> dictionary = new Dictionary<int, FrequencyAndValue>(); foreach (int number in input) { if (!dictionary.ContainsKey(number)) { dictionary.Add(number, new FrequencyAndValue(1, number) ); } else { dictionary[number].Frequency++; } } var result = dictionary.OrderByDescending(outer => outer.Value.Frequency); // How to translate the result into something I can print??
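
    On the lambda itself: outer is just the parameter name for each KeyValuePair<int, FrequencyAndValue> coming out of the dictionary, so outer.Value.Frequency reads the Frequency of that pair's value and OrderByDescending sorts by it (LINQ's ordering is stable, so equal frequencies keep their relative order). A minimal sketch of printing the result, using only the pair's Key so no assumptions about FrequencyAndValue's other members are needed:

        // result is an IOrderedEnumerable<KeyValuePair<int, FrequencyAndValue>>,
        // so it can simply be iterated with foreach.
        foreach (KeyValuePair<int, FrequencyAndValue> pair in result)
        {
            // repeat each number as often as it occurred: 5 5 5 13 13 6 6 7 8
            for (int i = 0; i < pair.Value.Frequency; i++)
            {
                Console.Write(pair.Key + " ");
            }
        }
        Console.WriteLine();

    Note that Dictionary enumeration order is not formally guaranteed, so if the "first occurrence wins ties" rule has to be bulletproof, store an insertion index alongside the frequency and sort on that as a secondary key.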

    Read the article

  • echo -e acts differently when run in a script by root on ubuntu

    - by ekrub
    When running a bash script on ubuntu 9.10, I get different behavior from bash echo's "-e" option depending on whether or not I'm running as root. Consider this script: $ cat echo-test if [ "`whoami`" = "root" ]; then echo "Running as root" fi echo Testing /bin/echo -e /bin/echo -e "foo\nbar" echo Testing bash echo -e echo -e "foo\nbar" When run as non-root user, I see this output: $ ./echo-test Testing /bin/echo -e foo bar Testing bash echo -e foo bar When run as root, I see this output: $ sudo ./echo-test Running as root Testing /bin/echo -e foo bar Testing bash echo -e -e foo bar Notice the "-e" being echoed in the last case ("-e foo" instead of "foo" on the second-to-last line). When running a script as root, the echo command runs as if "-e" was given and, if -e is given, the option itself is echoed. I can understand some subtle differences in behavior between /bin/echo and bash echo, but I would expect bash echo to behave the same no matter which user invokes it. Anyone know why this is the case? Is this a bug in bash echo? FYI -- I'm running GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
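
    One plausible explanation, offered as a guess from the listing above: the script has no #!/bin/bash line, so whatever launches it decides which shell interprets it. Run directly, bash re-runs it with bash (whose builtin echo honours -e); under sudo it can end up being handed to /bin/sh, which on Ubuntu is dash, whose builtin echo does not accept -e and prints it literally. A small sketch of the usual fixes:

        #!/bin/bash
        # pin the interpreter explicitly, or sidestep echo's portability quirks entirely:
        printf 'foo\nbar\n'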

    Read the article

  • Subdomain only accessible from one computer

    - by Edan Maor
    I recently added a wildcard A record to my domain (*.root.com), mapping it to a certain elastic ip on AWS. I've configured apache to redirect all references to something.root.com to root.com, except for one specific "dev" subdomain, which is hosting its own site (a Django app, specifically). The Problem: This setup works perfectly for me on my computer. But on other computers around the office, it doesn't seem to work. Specifically, trying to visit dev.root.com gives an "unable to find server" error. Pinging dev.root.com gives a "cannot resolve hostname" error. The weird thing: pinging any other subdomain of root.com does work, from all machines. I would think this was all due to DNS propagation, except all the computers are behind the same office router, so how could that be the case? Any ideas?
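
    A few diagnostic commands that may help narrow this down (a sketch; substitute the real domain). If the office resolver or a Windows client cached a negative answer for dev.root.com from before the wildcard record existed, the machines that asked earliest would keep failing while the others work:

        nslookup dev.root.com               # the machine's configured resolver
        nslookup dev.root.com 8.8.8.8       # a public resolver, for comparison
        ipconfig /flushdns                  # clear the local Windows DNS cache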

    Read the article

  • A graph-based tuple merge?

    - by user1644030
    I have paired values in tuples that are related matches (and technically still in CSV files). Neither of the paired values are necessarily unique. tupleAB = (A####, B###), (A###, B###), (A###, B###)... tupleBC = (B####, C###), (B###, C###), (B###, C###)... tupleAC = (A####, C###), (A###, C###), (A###, C###)... My ideal output would be a dictionary with a unique ID and a list of "reinforced" matches. The way I try to think about it is in a graph-based context. For example, if: tupleAB[x] = (A0001, B0012) tupleBC[y] = (B0012, C0230) tupleAC[z] = (A0001, C0230) This would produce: output = {uniquekey0001, [A0001, B0012, C0230]} Ideally, this would also be able to scale up to more than three tuples (for example, adding a "D" match that would result in an additional three tuples - AD, BD, and CD - and lists of four items long; and so forth). In regards to scaling up to more tuples, I am open to having "graphs" that aren't necessarily fully connected, i.e., every node connected to every other node. My hunch is that I could easily filter based on the list lengths. I am open to any suggestions. I think, with a few cups of coffee, I could work out a brute force solution, but I thought I'd ask the community if anyone was aware of a more elegant solution. Thanks for any feedback.
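
    A minimal sketch of the graph view described above: every ID is a node, every tuple is an edge, and each connected component becomes one merged record. The uniquekey labels are invented here purely to show the output shape; fully connected ("reinforced") components could then be filtered by length as suggested.

        from collections import defaultdict

        def merge_tuples(*tuple_lists):
            # build an undirected adjacency map from all tuple lists
            adjacency = defaultdict(set)
            for pairs in tuple_lists:
                for a, b in pairs:
                    adjacency[a].add(b)
                    adjacency[b].add(a)

            # depth-first search for connected components
            seen, components = set(), []
            for node in adjacency:
                if node in seen:
                    continue
                stack, component = [node], []
                while stack:
                    current = stack.pop()
                    if current in seen:
                        continue
                    seen.add(current)
                    component.append(current)
                    stack.extend(adjacency[current] - seen)
                components.append(component)

            return dict(("uniquekey%04d" % (i + 1), sorted(c))
                        for i, c in enumerate(components))

        # merge_tuples([("A0001", "B0012")], [("B0012", "C0230")], [("A0001", "C0230")])
        # -> {'uniquekey0001': ['A0001', 'B0012', 'C0230']}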

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing, I've compared using a for loop and increment variables versus list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using each method. Am I doing something wrong or is there a better approach? Here is a greatly simplified and shortened data structure: list = [ {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'biscuits and gravy'}, {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True, 'filenotfound': 'peaches and cream'}, {'key1': True, 'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'}, {'key1': False, 'dontcare': False, 'ignoreme': True, 'key2': False, 'filenotfound': 'over and under'}, {'key1': True, 'dontcare': True, 'ignoreme': False, 'key2': True, 'filenotfound': 'Scotch and... well... neat, thanks'} ] Here is the for loop version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True key1 = key2 = 0 for dictionary in list: if dictionary["key1"]: key1 += 1 if dictionary["key2"]: key2 += 1 print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above: Counts: key1: 3, subset key2: 2 Here is the other, perhaps more Pythonic, version: #!/usr/bin/env python # Python 2.6 # count the entries where key1 is True # keep a separate count for the subset that also have key2 True from operator import itemgetter KEY1 = 0 KEY2 = 1 getentries = itemgetter("key1", "key2") entries = map(getentries, list) key1 = len([x for x in entries if x[KEY1]]) key2 = len([x for x in entries if x[KEY1] and x[KEY2]]) print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2) Output for the data above (same as before): Counts: key1: 3, subset key2: 2 I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.
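
    For what it's worth, a shorter variant (a sketch, and not necessarily faster: counting n items stays O(n) however it is written). sum() over a generator avoids building the intermediate list, and True counts as 1; the data is renamed entries_list here only to avoid shadowing the built-in list.

        key1 = sum(1 for d in entries_list if d["key1"])
        key2 = sum(1 for d in entries_list if d["key1"] and d["key2"])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)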

    Read the article

  • Using Ant how can I dex a directory of jars?

    - by cbeaudin
    I have a directory full of jars (felix bundles). I want to iterate through all of these jars and create dex'd versions. My intent is to deploy each of these dex'd jars as standalone apk's since they are bundles. Feel free to straighten me out if I am approaching this from the wrong direction. This first part is just to try and create a corresponding .dex file for each jar. However when I run this I am getting a "no resources specified" error coming out of Ant. Is this the right approach, or is there a simpler approach to just input a jar and output a dex'd version of that jar? The ${file} is valid as it is spitting out the name of the file in the echo command. <target name="dexBundles" description="Run dex on all the bundles"> <taskdef resource="net/sf/antcontrib/antlib.xml" classpath="${basedir}/libs/ant-contrib.jar" /> <echo>Starting</echo> <for param="file"> <path> <fileset dir="${pre.dex.dir}"> <include name="**/*.jar" /> </fileset> </path> <sequential> <echo message="@{file}" /> <echo>Converting jar file @{file} into ${post.dex.dir}/@{file}.class...</echo> <apply executable="${dx}" failonerror="true" parallel="true" verbose="true"> <arg value="--dex" /> <arg value="--output=${post.dex.dir}/${file}.dex" /> <arg path="@{file}" /> </apply> </sequential> </for> <echo>Finished</echo> </target>
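
    The "no resources specified" message most likely comes from <apply> itself, which expects a nested fileset or other resource collection (and the output arg above mixes ${file} with the loop's @{file}). A sketch of one simpler route, keeping the <for> loop but running a plain <exec> per jar; naming the output @{file}.dex (the jar path plus .dex) is a simplification to avoid computing basenames and should be adjusted to ${post.dex.dir} as needed:

        <sequential>
          <echo>Converting jar file @{file}...</echo>
          <exec executable="${dx}" failonerror="true">
            <arg value="--dex" />
            <arg value="--output=@{file}.dex" />
            <arg path="@{file}" />
          </exec>
        </sequential>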

    Read the article

  • CGI Buffering issue

    - by Punit
    I have server-side C-based CGI code as follows: cgiFormFileSize("UPDATEFILE", &size); //UPDATEFILE = file being uploaded cgiFormFileName("UPDATEFILE", file_name, 1024); cgiFormFileContentType("UPDATEFILE", mime_type, 1024); buffer = malloc(sizeof(char) * size); if (cgiFormFileOpen("UPDATEFILE", &file) != cgiFormSuccess) { exit(1); } output = fopen("/tmp/cgi.tar.gz", "w+"); printf("The size of file is: %d bytes", size); inc = size/(1024*100); while (cgiFormFileRead(file, b, sizeof(b), &got_count) == cgiFormSuccess) { fwrite(b,sizeof(char),got_count,output); i++; if(i == inc && j<=100) { ***inc_pb*** = j; i = 0; j++; // j is the progress bar increment value } } cgiFormFileClose(file); retval = system("mkdir /tmp/update-tmp;\ cd /tmp/update-tmp;\ tar -xzf ../cgi.tar.gz;\ bash -c /tmp/update-tmp/update.sh"); However, this doesn't work as intended. Instead of printing 1, 2, ... 100 to progress_bar.txt one by one, it prints them in ONE GO; it seems to buffer and then write to the file. fflush() also didn't work. Any clue/suggestion would be really appreciated.
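
    Two hedged observations, since the snippet doesn't show how inc_pb reaches progress_bar.txt: first, cgic normally consumes the whole upload before the handler code runs, so this loop may be timing a fast local copy rather than the upload itself, which would make all the values appear almost at once; second, if the progress file is written through stdio, each value needs to reach the disk as it is produced. A sketch of an unbuffered progress writer (file name and function are hypothetical), to be called as write_progress(j) inside the if block:

        #include <stdio.h>

        /* sketch: append one progress value and push it to disk immediately */
        static void write_progress(int percent)
        {
            FILE *pb = fopen("/tmp/progress_bar.txt", "a");
            if (pb != NULL) {
                fprintf(pb, "%d\n", percent);
                fflush(pb);
                fclose(pb);
            }
        }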

    Read the article

  • pure/const functions in C++

    - by Albert
    Hi, I'm thinking of using pure/const functions more heavily in my C++ code. (pure/const attribute in GCC) However, I am curious how strict I should be about it and what could possibly break. The most obvious case is debug output (in whatever form: it could go to cout, to some file, or to some custom debug class). I will probably have a lot of functions which don't have any side effects apart from this sort of debug output. Whether or not the debug output is made, it has absolutely no effect on the rest of my application. Another case I'm thinking of is the use of my own SmartPointer class. In debug mode, my SmartPointer class has some global register where it does some extra checks. If I use such an object in a pure/const function, it does have some slight side effects (in the sense that some memory probably will be different), which should not be real side effects though (in the sense that the behaviour is not different in any way). The same goes for mutexes and other stuff. I can think of many complex cases where it has some side effects (in the sense that some memory will be different, maybe even some threads are created, some filesystem manipulation is done, etc) but has no computational difference (all those side effects could very well be left out and I would even prefer that). How does it work out in practice? If I mark such functions as pure/const, could it break anything (considering that the code is all correct)?
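
    For reference, a minimal sketch of the two GCC attributes being discussed, with the practical caveat that matters here: the compiler is allowed to merge, reorder, or drop calls to functions declared this way, so any debug output inside them may legitimately vanish or appear fewer times than expected. const is the stricter promise (no reads of global state at all); pure allows reads but no writes.

        // result depends only on the arguments; no reads of globals, no side effects
        extern int square(int x) __attribute__((const));

        // may read memory reachable from its arguments or globals, but must not modify anything
        extern int count_char(const char *s, char c) __attribute__((pure));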

    Read the article

  • Port Redirection on Mac OS X Lion

    - by Andreas
    I have tried to solve this issue using pf but with no luck. Basically, I am trying to redirect incoming port 443 traffic to port 22. I have tried to set up a rule in a file and load it in pf but I get a syntax error. Can anyone with more experience with pf provide some insight? Here's what I've attempted: pass in on en1 proto tcp from any to any port 443 rdr-to 127.0.0.1 port 22 and pass in quick proto tcp to port 443 rdr-to 127.0.0.1 port 22 I've been able to do this in Mac OS X Snow Leopard with ipfw: sudo ipfw add 1443 forward 127.0.0.1,22 ip from any to any 443 in but it doesn't work in Lion (it gives me an Invalid Argument error).
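
    A sketch that may explain the syntax errors: the pf shipped with Lion predates the OpenBSD 4.7 grammar, so redirection is written as a standalone rdr rule rather than the inline rdr-to used above. The file name below is hypothetical.

        # /etc/pf.443to22.conf (hypothetical)
        rdr pass on en1 inet proto tcp from any to any port 443 -> 127.0.0.1 port 22

        # load and enable:
        #   sudo pfctl -f /etc/pf.443to22.conf -e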

    Read the article

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place. The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions. # m h dom mon dow command 18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?
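
    One low-effort next step, sketched below: capture stderr from the cron run so whatever interrupts the dump at least leaves a message on disk (cron otherwise only mails job output to the local user, if mail is set up at all).

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt 2>> /home/andrew/svndump.err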

    Read the article

  • select failing with C program but not shell

    - by Gary
    So I have a parent and child process, and the parent can read output from the child and send to the input of the child. So far, everything has been working fine with shell scripts, testing commands which input and output data. I just tested with a simple C program and couldn't get it to work. Here's the C program: #include <stdio.h> int main( void ) { char stuff[80]; printf("Enter some stuff:\n"); scanf("%s", stuff); return 0; } The problem with with the C program is that my select fails to read from the child fd and hence the program cannot finish. Here's the bit that does the select.. //wait till child is ready fd_set set; struct timeval timeout; FD_ZERO( &set ); // initialize fd set FD_SET( PARENT_READ, &set ); // add child in to set timeout.tv_sec = 3; timeout.tv_usec = 0; int r = select(FD_SETSIZE, &set, NULL, NULL, &timeout); if( r < 1 ) { // we didn't get any input exit(1); } Does anyone have any idea why this would happen with the C program and not a shell one? Thanks!
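
    A likely culprit, offered as a hedged guess: when the child's stdout is a pipe rather than a terminal, stdio switches to full buffering, so the prompt from printf sits in the child's buffer while the child blocks in scanf, and the parent's select() times out. A sketch of the child with buffering disabled:

        #include <stdio.h>

        int main(void)
        {
            char stuff[80];

            setvbuf(stdout, NULL, _IONBF, 0);   /* or: fflush(stdout) after each printf */
            printf("Enter some stuff:\n");

            if (scanf("%79s", stuff) == 1)
                printf("You entered: %s\n", stuff);
            return 0;
        }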

    Read the article

  • What does the windbg command "kd" do?

    - by Oskar
    I ran kd by mistake and got some output that interested me, a reference to a line of code in my module that I can't see on the call stack of any thread. The lines weren't the beginning of the method so I don't think the reference is to a function pointer, but possibly the result of an exception being stored in memory??? Of course, that happens to be what I'm looking for... Update: The stack trace of the exception is: 0:000> kb *** Stack trace for last set context - .thread/.cxr resets it ChildEBP RetAddr Args to Child 0174f168 734ea84f 2cb9e950 00000000 2cb9e950 kernel32!LoadTimeZoneInformation+0x2b 0174f1c4 734ead92 00000022 00000001 000685d0 msvbvm60! RUN_INSTMGR::ExecuteInitTerm+0x178 0174f1f8 734ea9ee 00000000 0000002f 2dbc2abc msvbvm60! RUN_INSTMGR::CreateObjInstanceWithParts+0x1e4 0174f278 7350414e 2cb9e96c 00000000 0174f2f0 msvbvm60! RUN_INSTMGR::CreateObjInstance+0x14d 0174f2e4 734fa071 00000000 2cb9e96c 0174f2fc msvbvm60!RcmConstructObjectInstance+0x75 0174f31c 00976ef1 2cb9e950 00591bc0 0174fddc msvbvm60!__vbaNew+0x21 and into our code (create a new Form derived class) the dds output: 0:000> dds esp-0x40 esp+0x100 0174f05c 00000000 0174f060 00000000 0174f064 00000000 0174f068 00000000 0174f06c 00000000 0174f070 00000000 0174f074 00000000 0174f078 00000000 0174f07c 00000000 0174f080 00000000 0174f084 00000000 0174f088 00000000 0174f08c 00000000 0174f090 00000000 0174f094 00000000 0174f098 00000000 0174f09c 007f4f9b ourDll!formDerivedClass::Form_Initialize+0x10b [C:\Buildbox\formDerivedClass.frm @ 1452] etc which seems to indicate that Initialize is being called even though it isn't on the stack trace of either this exception or any of the threads. As suggested, it might all be a mismatch between pdbs and dlls, but it seems a coincidence that we end up in the right classes and methods.

    Read the article

  • getting attribute as column headers

    - by edwards
    I have the following XML: <DEVICEMESSAGES> <VERSION xml="1" checksum="" revision="0" envision="33050000" device="" /> <HEADER id1="0001" id2="0001" content="Nasher[&lt;messageid&gt;]: &lt;!payload&gt;" /> <MESSAGE level="7" parse="1" parsedefvalue="1" tableid="15" id1="24682" id2="24682" eventcategory="1003010000" content="Access to &lt;webpage&gt; was blocked due to its category (&lt;info&gt; by &lt;hostname&gt;)" /> </DEVICEMESSAGES> I am using the following XSLT: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text"/> <xsl:strip-space elements="*"/> <xsl:template match="DEVICEMESSAGES/HEADERS"> <xsl:value-of select="@id2"/>,<xsl:text/> <xsl:value-of select="@content"/>,<xsl:text/> <xsl:text>&#xa;</xsl:text> </xsl:template> </xsl:stylesheet> I get the following output: 0001 , Nasher[<messageid>]: <!payload> whereas I need the column headings, too: id2, content 0001 , Nasher[<messageid>]: <!payload>
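
    A sketch of one way to get the heading row, assuming the element really is HEADER as in the sample XML above (the posted template matches HEADERS): emit the heading once from a template on the document element, then output one CSV row per HEADER.

        <?xml version="1.0"?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:strip-space elements="*"/>

          <!-- heading line, written exactly once -->
          <xsl:template match="/DEVICEMESSAGES">
            <xsl:text>id2,content&#xa;</xsl:text>
            <xsl:apply-templates select="HEADER"/>
          </xsl:template>

          <!-- one row per HEADER element -->
          <xsl:template match="HEADER">
            <xsl:value-of select="@id2"/>
            <xsl:text>,</xsl:text>
            <xsl:value-of select="@content"/>
            <xsl:text>&#xa;</xsl:text>
          </xsl:template>
        </xsl:stylesheet>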

    Read the article

  • xsl transform: problem with Ampersand URL parameters

    - by Rac123
    I'm having issues with transforming XSL with parameters in a URL. I'm at a point where I can't change the C# code anymore; I can only make changes to the XSL file. C# code: string xml = "<APPLDATA><APPID>1052391</APPID></APPLDATA>"; XmlDocument oXml = new XmlDocument(); oXml.LoadXml(xml); XslTransform oXslTransform = new XslTransform(); oXslTransform.Load(@"C:\Projects\Win\ConsoleApps\XslTransformTest\S15033.xsl"); StringWriter oOutput = new StringWriter(); oXslTransform.Transform(oXml, null, oOutput) XSL Code: <body> <xsl:variable name="app"> <xsl:value-of select="normalize-space(APPLDATA/APPID)" /> </xsl:variable> <div id="homeImage" > <xsl:attribute name="style"> background-image:url("https://server/image.gif?a=10&amp;Id='<xsl:value-of disable-output-escaping="yes" select="$app" />'") </xsl:attribute> </div> </body> </html> URL transformed: https://server/image.gif?a=10&Id='1052391' URL Expected: https://server/image.gif?a=10&Id='1052391' How do I fix this? The output (oOutput.ToString()) is being used in an email template so it takes the transformed URL literally. When you click on this request (with the correct server name of course), a 403 (Access forbidden) error is thrown.

    Read the article

  • How to authorize standard users to install drivers on Windows XP

    - by Dr I
    I'm currently looking for a way to authorize my non-administrator users to install drivers. Here is the situation: all my users are standard users; they have a VirtualBox hypervisor if they need administrator rights. But if they plug a USB device into the local machine and try to redirect the device to the virtual machine, Windows asks for administrator rights. I've tried to set up these GPOs: Allow standard users to install drivers. Install WHQL Drivers: Allow Silently. I still don't know how to do this.

    Read the article

  • 32/64 bit problems with Eclipse CDT on Ubuntu

    - by waffleShirt
    I have just recently started running Linux on my PC and I am trying to start learning OpenGL. I am using the latest version of Eclipse CDT as my IDE, and my system is Ubuntu 10.10, 64 bit version. The problem I am having is that whenever I try to run a build from within the IDE I get the error message "Launch Failed. Binary Not Found." I've done a lot of looking around on the internet but I still can't solve the problem. I know for a fact that the binary is built; it can be run from a terminal window. According to posts I have seen, the problem is that Eclipse tries to run a 32 bit binary, but GCC 4.4.5 defaults to 64 bit binaries on a 64 bit system. I've seen a lot of information about using the -m32 flag in makefiles, but then I still get the following output in Eclipse: make all g++ -o HelloWorld2 main.o /usr/bin/ld: i386 architecture of input file `main.o' is incompatible with i386:x86-64 output /usr/bin/ld: final link failed: Invalid operation collect2: ld returned 1 exit status make: *** [HelloWorld2] Error 1 What I would like to know is how to either get Eclipse to launch the 64 bit binaries, or have Eclipse correctly compile 32 bit binaries.
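
    A sketch of the makefile side, assuming the goal is a 32-bit binary on 64-bit Ubuntu: the linker error above says main.o was compiled 32-bit but linked 64-bit, so -m32 has to appear on both the compile and the link line (with the g++-multilib package installed); dropping -m32 everywhere and building a plain 64-bit binary is the other consistent option. Recipe lines start with a tab.

        CXX      := g++
        CXXFLAGS := -m32 -g -Wall
        LDFLAGS  := -m32

        HelloWorld2: main.o
        	$(CXX) $(LDFLAGS) -o HelloWorld2 main.o

        main.o: main.cpp
        	$(CXX) $(CXXFLAGS) -c main.cpp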

    Read the article

  • Why do the outputs differ when I run this code using Netbeans 6.8 and Eclipse?

    - by Vimal Basdeo
    I am running the following code using Eclipse and NetBeans 6.8, and I want to see the available COM ports on my computer. When running in Eclipse it returns all available COM ports, but when running it in NetBeans, it does not seem to find any ports. public static void test(){ Enumeration lists=CommPortIdentifier.getPortIdentifiers(); System.out.println(lists.hasMoreElements()); while (lists.hasMoreElements()){ CommPortIdentifier cn=(CommPortIdentifier)lists.nextElement(); if ((CommPortIdentifier.PORT_SERIAL==cn.getPortType())){ System.out.println("Name is serail portzzzz "+cn.getName()+" Owned status "+cn.isCurrentlyOwned()); try{ SerialPort port1=(SerialPort)cn.open("ComControl",800000); port1.setSerialPortParams(9600, SerialPort.DATABITS_8, SerialPort.STOPBITS_1, SerialPort.PARITY_NONE); System.out.println("Before get stream"); OutputStream out=port1.getOutputStream(); InputStream input=port1.getInputStream(); System.out.println("Before write"); out.write("AT".getBytes()); System.out.println("After write"); int sample=0; //while((( sample=input.read())!=-1)){ System.out.println("Before read"); //System.out.println(input.read() + "TEsting "); //} System.out.println("After read"); System.out.println("Receive timeout is "+port1.getReceiveTimeout()); }catch(Exception e){ System.err.println(e.getMessage()); } } else{ System.out.println("Name is parallel portzzzz "+cn.getName()+" Owned status "+cn.isCurrentlyOwned()+cn.getPortType()+" "); } } } Output with Netbeans false Output using Eclipse true Name is serail portzzzz COM1 Owned status false Before get stream Before write After write Before read After read Receive timeout is -1 Name is serail portzzzz COM2 Owned status false Before get stream Before write After write Before read After read Receive timeout is -1 Name is parallel portzzzz LPT1 Owned status false2 Name is parallel portzzzz LPT2 Owned status false2
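
    A guess worth checking: the comm API discovers ports through a native library, so if NetBeans launches the JVM with a different working directory or java.library.path than Eclipse does, getPortIdentifiers() can come back empty without any error. A tiny diagnostic sketch to run from inside each IDE and compare:

        public class PortEnvCheck {
            public static void main(String[] args) {
                System.out.println("java.library.path = " + System.getProperty("java.library.path"));
                System.out.println("user.dir          = " + System.getProperty("user.dir"));
            }
        }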

    Read the article

  • C Programming: malloc() inside another function

    - by vikramtheone
    Hi Guys, I need help with malloc() inside another function. I'm passing a pointer and size to the function from my main() and I would like to allocate memory for that pointer dynamically using malloc() from inside that called function, but what I see is that the memory which is getting allocated is for the pointer declared within my called function and not for the pointer which is inside the main(). How should I pass a pointer to a function and allocate memory for the passed pointer from inside the called function? Can anyone shed light on this? Help!!! Vikram I have written the following code and I get the output shown below. SOURCE: main() { unsigned char *input_image; unsigned int bmp_image_size = 262144; if(alloc_pixels(input_image, bmp_image_size)==NULL) printf("\nPoint2: Memory allocated: %d bytes",_msize(input_image)); else printf("\nPoint3: Memory not allocated"); } signed char alloc_pixels(unsigned char *ptr, unsigned int size) { signed char status = NO_ERROR; ptr = NULL; ptr = (unsigned char*)malloc(size); if(ptr== NULL) { status = ERROR; free(ptr); printf("\nERROR: Memory allocation did not complete successfully!"); } printf("\nPoint1: Memory allocated: %d bytes",_msize(ptr)); return status; } PROGRAM OUTPUT: Point1: Memory allocated ptr: 262144 bytes Point2: Memory allocated input_image: 0 bytes
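
    The usual fix, shown as a sketch that mirrors the names above: the function receives a copy of the pointer, so assigning malloc's result to that copy never reaches main(); passing the address of the caller's pointer (unsigned char **) makes the allocation visible to the caller. ERROR and NO_ERROR are the macros from the original code; <stdlib.h> and <stdio.h> are required.

        signed char alloc_pixels(unsigned char **ptr, unsigned int size)
        {
            *ptr = (unsigned char *)malloc(size);
            if (*ptr == NULL) {
                printf("\nERROR: Memory allocation did not complete successfully!");
                return ERROR;           /* nothing to free on failure */
            }
            return NO_ERROR;
        }

        /* call site in main():
               unsigned char *input_image = NULL;
               if (alloc_pixels(&input_image, bmp_image_size) == NO_ERROR)
                   ...   // input_image now points at the allocated block
        */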

    Read the article

  • Normal pointer vs Auto pointer (std::auto_ptr)

    - by AKN
    Code snippet (normal pointer) int *pi = new int; int i = 90; pi = &i; int k = *pi + 10; cout<<k<<endl; delete pi; [Output: 100] Code snippet (auto pointer) Case 1: std::auto_ptr<int> pi(new int); int i = 90; pi = &i; int k = *pi + 10; //Throws unhandled exception error at this point while debugging. cout<<k<<endl; //delete pi; (It deletes by itself when it goes out of scope, so an explicit 'delete' call is not required) Case 2: std::auto_ptr<int> pi(new int); int i = 90; *pi = 90; int k = *pi + 10; cout<<k<<endl; [Output: 100] Can someone please tell me why it fails to work in case 1?
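
    For what it's worth, a sketch of the underlying problem in both snippets: after pi = &i the originally allocated heap int is leaked and the pointer refers to a stack variable, so the later delete (explicit in the raw-pointer case, implicit in auto_ptr's cleanup) is applied to memory that never came from new, which is undefined behaviour; case 2 avoids this by assigning the value rather than the address. The raw-pointer version without the leak and the bad delete:

        #include <iostream>

        int main()
        {
            int *pi = new int;
            int i = 90;
            *pi = i;                              // copy the value; pi still points at the heap int
            std::cout << *pi + 10 << std::endl;   // 100
            delete pi;                            // deletes exactly what new allocated
            return 0;
        }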

    Read the article

  • Unique prime factors using HashSet

    - by theGreenCabbage
    I wrote a method that recursively finds prime factors. Originally, the method simply printed values. I am currently trying to add them to a HashSet to find the unique prime factors. In each of my original print statements, I added a primes.add() in order to add that particular integer into my set. My printed output remains the same, for example, if I put in the integer 24, I get 2*2*2*3. However, as soon as I print the HashSet, the output is simply [2]. public static Set<Integer> primeFactors(int n) { Set<Integer> primes = new HashSet<Integer>(); if(n <= 1) { System.out.print(n); primes.add(n); } else { for(int factor = 2; factor <= n; factor++) { if(n % factor == 0) { System.out.print(factor); primes.add(factor); if(factor < n) { System.out.print('*'); primeFactors(n/factor); } return primes; } } } return primes; } I have tried debugging via putting print statements around every line, but was unable to figure out why my .add() was not adding some values into my HashSet.
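
    A sketch of one minimal change: each recursive call builds and returns its own HashSet, and that return value is currently thrown away, so only the factor found at the top level survives. Merging the recursive result keeps every unique factor (alternatively, thread a single Set down through the recursion as a parameter).

        if (factor < n) {
            System.out.print('*');
            primes.addAll(primeFactors(n / factor));   // keep the factors found deeper down
        }
        return primes;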

    Read the article

  • Why does PsExec hang after successfully running a powershell script?

    - by Matt
    The script is fairly straightforward. It simply tries to start a bunch of Windows services. Local execution works fine on the target machine. The script actually executes fine via PsExec as well; it just never returns until I hit the "enter" key on my CMD prompt. This is a problem, because this is being called from TeamCity, and it makes the Agent hang waiting for PsExec to return. I've tried the following: Adding an exit and exit 0 at the end of the Powershell script Adding a < NUL to the end of the PsExec call, per the answer in this SF question Adding a > stdout redirect This is how I am actually calling psexec: psexec \\target -u domain\username -p password powershell c:\path\script.ps1 No matter what I do, it hangs until I hit Enter locally on the cmd prompt. After I hit enter, I get the message: powershell exited on target with error code 0.
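
    A commonly suggested workaround, sketched here without any guarantee for this particular setup: powershell.exe launched this way tends to keep waiting on stdin after the script finishes, so feed it a trivial input through cmd and a pipe instead of invoking it directly:

        psexec \\target -u domain\username -p password cmd /c "echo . | powershell -NonInteractive c:\path\script.ps1"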

    Read the article

  • OpenVPN Configuration - Windows 7 client & debian server

    - by Guillaume
    I recently formatted my Windows 7 computer and lost my client's config files for OpenVPN. I recovered the certificates and default config that were left on the server but I haven't managed to make the whole thing work again. I assume the server's config and routing table are OK because it was working before (although quite some time ago). Would any of you experts be able to help? server.conf # Serveur TCP/666 mode server proto udp port 666 dev tun # Cles et certificats ca ca.crt cert server.crt key server.key dh dh1024.pem tls-auth ta.key 0 cipher AES-256-CBC # Reseau server 10.8.0.0 255.255.255.0 #push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" push "redirect-gateway def1" keepalive 10 120 # Securite user nobody group nogroup chroot /etc/openvpn/jail persist-key persist-tun comp-lzo # Log verb 3 mute 20 status openvpn-status.log log-append /var/log/openvpn.log client.conf # Client client dev tun proto udp remote *my server's ip address*:666 cipher AES-256-CBC # Cles ca ca.crt cert client1.crt key client1.key tls-auth ta.key 1 # Securite nobind persist-key persist-tun comp-lzo verb 3 Routing table on debian server when OpenVPN server is running: Destination Gateway Genmask Indic Metric Ref Use Iface 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 my server's ip * 255.255.255.0 U 0 0 0 eth0 default 72815.trg.dedic 0.0.0.0 UG 0 0 0 eth0 Routing table on Windows 7 client (OpenVPN not working) =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: [...] =========================================================================== Persistent Routes: None And when the link is established between my client and the server: The server's routing table stays the same. 
The client's becomes: =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 my server's ip 255.255.255.255 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Persistent Routes: None What's working: Server and client do connect to each other, SSL certificates are OK. The client gets an IP (10.8.0.6) from the server OpenVPN client is started as an administrator. But: I cannot ping the other one on either side. 'Gateway' value is empty on client's side (in the adapter's "status" window). Client has got no internet access when the link is up. Ideal configuration: I only want the client to be able to use the server's Internet access and access its resources (MySQL server in particular). I do not need or want the server to access the client's local network. The client needs to be able to access it's local network, although all Internet traffic should be redirected to the VPN link. I spent a considerable amount of time on this but it's still not working, any help would be much appreciated. Thanks :)
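
    Two server-side pieces that are easy to miss and would explain the missing Internet access through the tunnel (the ping failures may have an additional cause), offered as a sketch to check rather than a diagnosis: IPv4 forwarding must be enabled on the Debian box and the VPN subnet must be NATed out of eth0. Adjust the interface and subnet to the configs above.

        echo 1 > /proc/sys/net/ipv4/ip_forward     # or set net.ipv4.ip_forward=1 in /etc/sysctl.conf
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE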

    Read the article
