Search Results

Search found 904 results on 37 pages for '4096'.


  • Error number 13 - Remote access svn with dav_svn failing

    - by C. Ross
    I'm getting the following error on my svn repository <D:error> <C:error/> <m:human-readable errcode="13"> Could not open the requested SVN filesystem </m:human-readable> </D:error> I've followed the instructions from How-To Geek and the Ubuntu Community Page, but without success. I've even given the repository 777 permissions. <Location /svn/myProject > # Uncomment this to enable the repository DAV svn # Set this to the path to your repository SVNPath /svn/myProject # Comments # Comments # Comments AuthType Basic AuthName "My Subversion Repository" AuthUserFile /etc/apache2/dav_svn.passwd # More Comments </Location> The permissions follow: drwxrwsrwx 6 www-data webdev 4096 2010-02-11 22:02 /svn/myProject And svnadmin verifies the directory: $ svnadmin verify /svn/myProject/ * Verified revision 0. I'm accessing the repository at http://ipAddress/svn/myProject Edit: The Apache error log says [Fri Feb 12 13:55:59 2010] [error] [client <ip>] (20014)Internal error: Can't open file '/svn/myProject/format': Permission denied [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not fetch resource information. [500, #0] [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem [500, #13] [Fri Feb 12 13:55:59 2010] [error] [client <ip>] Could not open the requested SVN filesystem [500, #13] Even though I confirmed that this file is ugo readable and writable. What am I doing wrong?

    Read the article

  • What is the WCF equivalent?

    - by klausbyskov
    I am trying to port some code that is based on WSE3.0 to WCF. Basically, the old code has the following configuration: <microsoft.web.services3> <diagnostics> <trace enabled="true" input="InputTrace.webinfo" output="OutputTrace.webinfo" /> </diagnostics> <tokenIssuer> <statefulSecurityContextToken enabled="false" /> </tokenIssuer> </microsoft.web.services3> When calling the same service through my "Service Reference" I get this error: Request does not contain required Security header My binding looks like this: <basicHttpBinding> <binding name="LegalUnitGetBinding" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="Transport"> </security> </binding> </basicHttpBinding> From what I have understood, the service I'm calling only requires an SSL connection, since it receives a username and password as part of a request parameter. Any help or suggestions would be greatly appreciated.

    Read the article

  • Help with SDL_mixer (no sound)

    - by Kaizoku
    Hello, I have this strange problem with SDL_mixer, it doesn't want to play music. It doesn't throw any error, it just skips it. Any advice? I am compiling on linux with libvorbis. audio.h #ifndef AUDIO_H #define AUDIO_H #include <string> #include <SDL/SDL_mixer.h> class Audio { private: Mix_Music *music; public: Audio(); virtual ~Audio(); public: void setMusic(std::string path); void playMusic(); }; #endif /* AUDIO_H */ audio.cpp #include "Audio.h" #include <stdexcept> Audio::Audio() { if (0 == Mix_Init(MIX_INIT_OGG)) throw std::runtime_error(Mix_GetError()); if (-1 == Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 4096)) throw std::runtime_error(Mix_GetError()); } Audio::~Audio() { Mix_FreeMusic(music); Mix_Quit(); } void Audio::setMusic(std::string path) { music = Mix_LoadMUS(path.c_str()); if (NULL == music) throw std::runtime_error(Mix_GetError()); } void Audio::playMusic() { if (NULL != music) { if (-1 == Mix_PlayMusic(music, -1)) throw std::runtime_error(Mix_GetError()); } }
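    One thing worth checking (an assumption, since the calling code is not shown above): Mix_PlayMusic() only starts playback and the audio is mixed asynchronously, so if the program tears the Audio object down or returns from main() right after playMusic(), the process exits before anything becomes audible and no error is ever reported. A minimal C-style sketch of a driver that keeps the process alive while the music plays ("song.ogg" is a placeholder path):

      #include <SDL/SDL.h>
      #include <SDL/SDL_mixer.h>

      int main(int argc, char *argv[])
      {
          /* Initialising the SDL audio subsystem up front does not hurt and is
             required on some setups before Mix_OpenAudio(). */
          if (SDL_Init(SDL_INIT_AUDIO) < 0)
              return 1;
          if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 4096) < 0)
              return 1;

          Mix_Music *music = Mix_LoadMUS("song.ogg");
          if (music == NULL)
              return 1;

          Mix_PlayMusic(music, 1);          /* play once */
          while (Mix_PlayingMusic())        /* playback is asynchronous: keep running */
              SDL_Delay(100);

          Mix_FreeMusic(music);
          Mix_CloseAudio();
          SDL_Quit();
          return 0;
      }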

    Read the article

  • vsts load test datasource issues

    - by ashish.s
    Hello, I have a simple VSTS load test that uses a data source. The connection string for the source is as follows: <connectionStrings> <add name="MyExcelConn" connectionString="Driver={Microsoft Excel Driver (*.xls)};Dsn=Excel Files;dbq=loginusers.xls;defaultdir=.;driverid=790;maxbuffersize=4096;pagetimeout=20;ReadOnly=False" providerName="System.Data.Odbc" /> </connectionStrings> The data source configuration is as follows, and I am getting the following error: TestError 1,000 The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only. ERROR [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed ERROR [42000] [Microsoft][ODBC Excel Driver] Cannot update. Database or object is read-only. I wrote a test just to check whether creating an ODBC connection would work, and it does; the test is as follows: [TestMethod] public void TestExcelFile() { string connString = ConfigurationManager.ConnectionStrings["MyExcelConn"].ConnectionString; using (OdbcConnection con = new OdbcConnection(connString)) { con.Open(); System.Data.Odbc.OdbcCommand objCmd = new OdbcCommand("SELECT * FROM [loginusers$]"); objCmd.Connection = con; OdbcDataAdapter adapter = new OdbcDataAdapter(objCmd); DataSet ds = new DataSet(); adapter.Fill(ds); Assert.IsTrue(ds.Tables[0].Rows.Count > 1); } } Any ideas?

    Read the article

  • script to find "deny" ACE in ACLs, and remove it

    - by Tom
    On my 100TB cluster, I need to find dirs and files that have a "deny" ACE within their ACL, then remove that ACE on each instance. I'm using the following: # find . -print0 | xargs -0 ls -led | grep deny -B4 and get this output (partial, for example only) -r--rw---- 1 chris GroupOne 4096 Mar 6 18:12 ./directoryA/fileX.txt OWNER: user:chris GROUP: group:GroupOne 0: user:chris allow file_gen_read,std_write_dac,file_write_attr 1: user:chris deny file_write,append,file_write_ext_attr,execute -- -r--rwxrwx 1 chris GroupOne 14728221 Mar 6 18:12 ./directoryA/subdirA/fileZ.txt OWNER: user:chris GROUP: group:GroupOne 0: user:chris allow file_gen_read,std_write_dac,file_write_attr 1: user:chris deny file_write,append,file_write_ext_attr,execute -- OWNER: user:bob GROUP: group:GroupTwo 0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit 1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit 2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit -- As you can see, depending on where the "deny" ACE is, I can see/not-see the path. I could increase the -B value (I've seen up to 8 ACEs on a file) but then I would get more output to distill from... What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command: chmod -a# $ACENUMBER $PATHTOFILE Additional issue is that the find command (above) gives a relative path, whereas I need the full path. I guess that would need to be edited somehow. Any guidance on how to accomplish this?

    Read the article

  • Using stdint.h and ANSI printf?

    - by nn
    Hi, I'm writing a bignum library, and I want to use efficient data types to represent the digits. Particularly integer for the digit, and long (if strictly double the size of the integer) for intermediate representations when adding and multiplying. I will be using some C99 functionality, but trying to conform to ANSI C. Currently I have the following in my bignum library: #include <stdint.h> #if defined(__LP64__) || defined(__amd64) || defined(__x86_64) || defined(__amd64__) || defined(__amd64__) || defined(_LP64) typedef uint64_t u_w; typedef uint32_t u_hw; #define BIGNUM_DIGITS 2048 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT32_MAX #define U_HW_MIN UINT32_MIN #define U_W_MAX UINT64_MAX #define U_W_MIN UINT64_MIN #else typedef uint32_t u_w; typedef uint16_t u_hw; #define BIGNUM_DIGITS 4096 #define U_HW_BITS 16 #define U_W_BITS 32 #define U_HW_MAX UINT16_MAX #define U_HW_MIN UINT16_MIN #define U_W_MAX UINT32_MAX #define U_W_MIN UINT32_MIN #endif typedef struct bn { int sign; int n_digits; // #digits should exclude carry (digits = limbs) int carry; u_hw tab[BIGNUM_DIGITS]; } bn; As I haven't written a procedure to write the bignum in decimal, I have to analyze the intermediate array and printf the values of each digit. However I don't know which conversion specifier to use with printf. Preferably I would like to write to the terminal the digit encoded in hexadecimal. The underlying issue is, that I want two data types, one that is twice as long as the other, and further use them with printf using standard conversion specifiers. It would be ideal if int is 32bits and long is 64bits but I don't know how to guarantee this using a preprocessor, and when it comes time to use functions such as printf that solely rely on the standard types I no longer know what to use.
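    A note that may help with the printf question (this relies on C99's <inttypes.h>, the companion of the <stdint.h> header already used above, so it goes slightly beyond strict C89): the header defines format-specifier macros such as PRIx32, PRIu32, PRIx64 and PRIu64 that expand to the correct conversion specifier for the fixed-width types on each platform, so the digits can be printed in hexadecimal without guessing whether the underlying type is int, long or long long. (As a side point, <stdint.h> defines *_MIN macros only for the signed types; the minimum of an unsigned type is simply 0, so there is no standard UINT32_MIN or UINT64_MIN.) A minimal sketch:

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      int main(void)
      {
          uint32_t half_word = 0xDEADBEEFu;                   /* u_hw on the 64-bit branch */
          uint64_t word      = UINT64_C(0x0123456789ABCDEF);  /* u_w  on the 64-bit branch */

          /* The PRI* macros expand to the right length modifier for this platform. */
          printf("half word: %" PRIx32 "\n", half_word);
          printf("word:      %" PRIx64 "\n", word);
          return 0;
      }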

    Read the article

  • Getting all new messages from a Maildir in python

    - by Jesper
    I have a mail dir: foo@foo:~/Maildir$ ls -l total 288 drwx------ 2 foo foo 155648 2010-04-19 15:19 cur -rw------- 1 foo foo 440 2010-03-20 08:50 dovecot.index.log -rw------- 1 foo foo 112 2010-03-20 08:49 dovecot-uidlist -rw------- 1 foo foo 8 2010-03-20 08:49 dovecot-uidvalidity -rw------- 1 foo foo 0 2010-03-20 08:49 dovecot-uidvalidity.4ba48c0e drwx------ 2 foo foo 114688 2010-04-19 16:07 new drwx------ 2 foo foo 4096 2010-04-19 16:07 tmp And in python I'm trying to get all new messages (Python 2.6.5rc2). First, getting "Maildir" works: >>> import mailbox >>> md = mailbox.Maildir('/home/foo/Maildir') >>> md.iterkeys().next() '1269924477.Vfc01I4249fM708004.foo' But how do I access "Maildir/new"? This does not work: >>> md = mailbox.Maildir('/home/foo/Maildir/new') >>> md.iterkeys().next() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/mailbox.py", line 346, in iterkeys self._refresh() File "/usr/lib/python2.6/mailbox.py", line 467, in _refresh for entry in os.listdir(subdir_path): OSError: [Errno 2] No such file or directory: '/home/foo/Maildir/new/new' >>> Any ideas?

    Read the article

  • Self-hosted WCF server and SSL

    - by jitm
    Hello, there is a self-hosted WCF server (not IIS), and certificates were generated (on Windows XP) from the command line like this: makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureClient -sky exchange -pe makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureServer -sky exchange -pe These certificates were added to the server code like this: serviceCred.ServiceCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureServer"); serviceCred.ClientCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureClient"); After all the previous operations I created a simple client to check the SSL connection to the server. Client configuration: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IAdminContract" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Basic"/> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="https://myhost:8002/Admin" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IAdminContract" contract="Admin.IAdminContract" name="BasicHttpBinding_IAdminContract" /> </client> </system.serviceModel> </configuration> Code: Admin.AdminContractClient client = new AdminContractClient("BasicHttpBinding_IAdminContract"); client.ClientCredentials.UserName.UserName = "user"; client.ClientCredentials.UserName.Password = "pass"; var result = client.ExecuteMethod() During execution I receive the following error: The provided URI scheme 'https' is invalid; expected 'http'.\r\nParameter name: via Question: How do I enable SSL for the self-hosted server, and where should I set up certificates for the client and the server? Thanks.

    Read the article

  • How to make new file permission inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. Then I am running a script under the user id 'robot'. robot writes to the data directory and updates files inside. The idea is that data is open for both me and robot to update. So I set up the permissions and owner group like this: drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this: -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps. In any case, the new files do not really follow it. $ umask -S u=rwx,g=rx,o=rx I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.

    Read the article

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux: the socket is non-blocking and, for polling, I am using epoll. Benchmarks show that the server performs fine and, according to Valgrind, there are no memory leaks or other problems. The only problem is that when a write() call is interrupted (because the client closed the connection), the server will encounter a SIGPIPE. I trigger the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It does lots of requests in a row which all work perfectly. Now I press CTRL-C and restart "siege". Sometimes I am lucky and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, execute close() on the fd and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly anymore. Here is the strace output: accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5 fcntl64(5, F_GETFD) = 0 fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0 epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1 read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161 write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106 <<<<< write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe) <<<<< Why did the previous write() work fine but not this one? --- SIGPIPE (Broken pipe) @ 0 (0) --- As you can see, the client establishes a new connection which consequently is accepted. Then it's added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in "siege", because the server then blocks any incoming connections, no matter which client they come from.
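    A side note that may be relevant (offered as a sketch, not a diagnosis of why the server stops answering): by default SIGPIPE terminates the process, so handling EPIPE from write() only works reliably if the signal is ignored globally or suppressed per call. The two usual options in C on Linux:

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <signal.h>

      /* Option 1: ignore SIGPIPE once at startup, so a write() to a closed
         connection returns -1 with errno == EPIPE instead of raising a
         process-terminating signal. */
      static void ignore_sigpipe(void)
      {
          signal(SIGPIPE, SIG_IGN);
      }

      /* Option 2: suppress the signal per call by sending with MSG_NOSIGNAL
         (Linux-specific) instead of plain write(). */
      static ssize_t send_nosignal(int fd, const void *buf, size_t len)
      {
          return send(fd, buf, len, MSG_NOSIGNAL);
      }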

    Read the article

  • Upload image from Silverlight to ASP .net MVC 2.0 Action

    - by Raha
    Hello ... I have been trying really hard to upload a photo from a Silverlight app to an ASP.NET MVC app. I simply use WebClient like this in Silverlight: private bool UploadFile(string fileName, Stream data) { try { UriBuilder ub = new UriBuilder("http://localhost:59933/Admin/Upload"); ub.Query = string.Format("filename={0}", fileName); WebClient wc = new WebClient(); wc.OpenReadCompleted += (sender, e) => { PushData(data, e.Result); e.Result.Close(); data.Close(); }; wc.OpenWriteAsync(ub.Uri); } catch (Exception) { throw; } return true; } private void PushData(Stream input, Stream output) { byte[] buffer = new byte[4096]; int byteRead; while ((byteRead = input.Read(buffer, 0, buffer.Length)) != 0) { output.Write(buffer, 0, byteRead); } } On the other hand I have created a simple action called Upload like this: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Upload() { //Some code here } The problem is that it doesn't hit the breakpoint in the Upload method. Is this even possible? The reason why I am using Silverlight is that I want to preview the image before uploading it to the server, and that is very simple in Silverlight. I failed to do that in JavaScript, and that is why Silverlight might be more useful here.

    Read the article

  • How to configure oracle instantclient for mono?

    - by funwithcoding
    Mono is really awesome. Some of our applications worked on Linux out of the box even without recompiling the binary. However, I am having a tough time configuring Oracle Instant Client to use it with Mono. I installed Instant Client on a CentOS VM (by installing the Instant Client RPM), but I did not find TNSNAMES.ORA anywhere. I searched for oracle and found that the following path contains the Oracle libraries. [root@bagvapp rupert]# ll /usr/lib/oracle/11.2/client/lib/ total 143280 -rw-r--r-- 1 root root 7456 Aug 14 2009 cobsqlintf.o -rw-r--r-- 1 root root 342 Aug 14 2009 glogin.sql lrwxrwxrwx 1 root root 17 Mar 9 06:52 libclntsh.so -> libclntsh.so.11.1 -rw-r--r-- 1 root root 40088477 Aug 14 2009 libclntsh.so.11.1 -rw-r--r-- 1 root root 6986848 Aug 14 2009 libnnz11.so lrwxrwxrwx 1 root root 15 Mar 9 06:52 libocci.so -> libocci.so.11.1 -rw-r--r-- 1 root root 1879549 Aug 14 2009 libocci.so.11.1 -rw-r--r-- 1 root root 89377610 Aug 14 2009 libociei.so -rw-r--r-- 1 root root 152304 Aug 14 2009 libocijdbc11.so -rw-r--r-- 1 root root 1501651 Aug 14 2009 libsqlplusic.so -rw-r--r-- 1 root root 1218075 Aug 14 2009 libsqlplus.so -rw-r--r-- 1 root root 777979 Aug 14 2009 libsqora.so.11.1 -rw-r--r-- 1 root root 1996228 Aug 14 2009 ojdbc5.jar -rw-r--r-- 1 root root 2111220 Aug 14 2009 ojdbc6.jar -rw-r--r-- 1 root root 298388 Aug 14 2009 ottclasses.zip drwxr-xr-x 3 root root 4096 Mar 9 06:52 precomp -rw-r--r-- 1 root root 37807 Aug 14 2009 xstreams.jar There is no TNSPING available and no TNSNAMES.ORA. Now, how do I configure Mono to use this as the Oracle client, and how do I specify the Oracle database in the app.config connection string section? Though Mono is a powerful framework, it seems to have problems like Linux does: all the documentation is only available in mailing lists, and whatever is available on the official site is either outdated or not clear for a normal user. Hope things will change soon and Mono will be THE programming framework for Linux.

    Read the article

  • Read HTML file into email

    - by cast01
    I've written a script to send HTML emails and all works well. I have the email stored in a separate HTML file which is then read in using a while loop and fgets(). However, I want to be able to pass variables into the HTML. For example, in an HTML file I may have something like.. <body> Dear Name <br/> Thank you for your recent purchase </body> and I read this into a string like so $file = fopen($filename, "r"); while(!feof($file)) { $html.= fgets($file, 4096); } fclose ($file); I want to be able to replace "Name" in the HTML file with a variable, and I'm not entirely sure of the best way to do this. I could always make my own tag and then use a regex to replace it with the name once I've read the file into the string, but I'm wondering if there is a better/easier method to do this. (On a side note, if anyone knows whether it's better to use file_get_contents instead of multiple calls to fgets, I'd be interested to know.)

    Read the article

  • Cannot upload media via Wordpress uploader

    - by Justin Johnson
    This has to do with media uploading in WordPress. Every time WP creates a folder for new uploads (it organizes uploads by year and month: yyyy/mm), it creates it with the "apache:apache" user and group, with full access to all (777 or drwxrwxrwx). However, after that, WP cannot create a folder within that folder (e.g.: mkdir 2011 succeeds, but mkdir 2011/01 fails). Also, uploads cannot be moved into these newly created folders even though the permissions are 777 (rwxrwxrwx). Once a month, I have to chown the newly created folders to the same user:group as the rest of the files. Once I do that, uploading works fine (which doesn't make sense to me). The really frustrating part is that this problem doesn't exist in other WP installs on other domains on the same server. * I wasn't sure if this should be here or on serverfault. Edit: The containing directory /.../httpdocs/blog/wp-content/uploads has the correct ownership drwxrwxrwx 5 myuser psaserv 4096 Jun 3 18:38 uploads This is a Plesk/CentOS environment hosted by Media Temple (dv). I've written the following test script to simulate the problem <?php $d = "d" . mt_rand(100, 500); var_dump( get_current_user(), $d, mkdir($d), chmod($d, 0777), mkdir("$d/$d"), chmod("$d/$d", 0777), fileowner($d), getmyuid() ); The script always creates the first directory mkdir($d) successfully. On domain A, where the WP problem is, it cannot create the nested directory mkdir("$d/$d"). However, on domain B, both directories are successfully created. I am running each script at /var/www/vhosts/domainA/httpdocs/tmp/t.php and /var/www/vhosts/domainB/httpdocs/tmp/t.php respectively. I checked the permissions on tmp, httpdocs, and domain[AB] and they are the same for each path. The only thing that differs is the user.

    Read the article

  • Space-saving character encoding for Japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types and even a lot of unused code points. So if I want to use them, I waste a lot of memory (not only for storing multi-byte text, I mean especially for gaps in my bitmap font), and VRAM is usually really valuable... So the only reasonable thing seems to be to use a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and thus my own order of characters in my texture). In my specific case I have texture space for 4096 different characters and need characters to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK code pages). Has anybody ever had a similar problem (I really wonder if not)? Is there already an approach? Edit: The same problem is described here http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for storing these bitmap-font mappings space-efficiently for UTF-8. So any further help is welcome!
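    To make the custom-mapping idea concrete, here is a minimal C sketch (all names are made up for illustration): the text itself stays in plain UTF-8, and only the glyphs actually used are packed into the 4096-slot texture, with a small sorted table translating each Unicode code point to its slot. The encoding stays standard while no texture space is spent on unused code points; the table would be generated offline from the set of characters the application actually needs.

      #include <stddef.h>
      #include <stdint.h>

      /* One entry per glyph actually packed into the atlas, kept sorted by
         code point so a lookup is a binary search. */
      typedef struct {
          uint32_t codepoint;   /* Unicode scalar value, decoded from UTF-8 */
          uint16_t atlas_slot;  /* 0..4095: where the glyph sits in the texture */
      } GlyphEntry;

      /* Returns the atlas slot for a code point, or -1 if it was not packed. */
      static int lookup_glyph(const GlyphEntry *table, size_t count, uint32_t cp)
      {
          size_t lo = 0, hi = count;
          while (lo < hi) {
              size_t mid = lo + (hi - lo) / 2;
              if (table[mid].codepoint < cp)
                  lo = mid + 1;
              else
                  hi = mid;
          }
          if (lo < count && table[lo].codepoint == cp)
              return table[lo].atlas_slot;
          return -1;  /* caller falls back to a placeholder glyph */
      }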

    Read the article

  • Using GNU Octave FFT functions

    - by CFP
    Hello world! I'm playing with octave's fft functions, and I can't really figure out how to scale their output: I use the following (very short) code to approximate a function: function y = f(x) y = x .^ 2; endfunction; X=[-4096:4095]/64; Y = f(X); # plot(X, Y); F = fft(Y); S = [0:2047]/2048; function points = approximate(input, count) size = size(input)(2); fourier = [fft(input)(1:count) zeros(1, size-count)]; points = ifft(fourier); endfunction; Y = f(X); plot(X, Y, X, approximate(Y, 10)); Basically, what it does is take a function, compute the image of an interval, fft-it, then keep a few harmonics, and ifft the result. Yet I get a plot that is vertically compressed (the vertical scale of the output is wrong). Any ideas? Thanks!
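    A hedged observation on the scaling (assuming the plot is effectively showing the real part of the truncated reconstruction): for a real signal the DFT is conjugate-symmetric, Y(N-k) = conj(Y(k)), so zeroing everything above the first `count` bins also throws away each kept bin's negative-frequency partner. In LaTeX, the full inverse transform of a real signal can be written as

      \[
        y_n \;=\; \frac{1}{N}\sum_{k=0}^{N-1} Y_k e^{2\pi i k n/N}
              \;=\; \frac{Y_0}{N} \;+\; \frac{2}{N}\sum_{k=1}^{N/2-1} \lvert Y_k\rvert \cos\!\Bigl(\tfrac{2\pi k n}{N} + \varphi_k\Bigr) \;+\; \frac{Y_{N/2}}{N}(-1)^n ,
      \]

    and keeping only k = 0..count-1 leaves each cosine with a factor 1/N instead of 2/N, i.e. every non-DC component at half amplitude, which would appear as exactly the vertical compression described. Mirroring the kept coefficients into fourier(end-count+2:end) before the ifft (or doubling the non-DC bins) should restore the scale.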

    Read the article

  • WCF Fails when using impersonation over 2 machine boundaries (3 machines)

    - by MrTortoise
    These scenarios work in their pieces. It's when I put it all together that it breaks. I have a WCF service using netTCP that uses impersonation to get the caller's ID (role-based security will be used at this level); on top of this is a WCF service using basicHttp with TransportCredentialOnly which also uses impersonation. I then have a client front end that connects to the basicHttp service. The aim of the game is to return the client's username from the netTCP service at the bottom, so ultimately I can use role-based security here. Each service is on a different machine, and each service works when you remove any calls they make to other services and run a client for them both locally and remotely. I.e. the problem only manifests when you jump across more than one machine boundary; the setup breaks when I connect each part together, but the parts work fine on their own. I also specify [OperationBehavior(Impersonation = ImpersonationOption.Required)] in the method and have IIS set up to only allow Windows authentication (actually I have anonymous enabled still, but disabling it makes no difference). This impersonation works fine in the scenario where I have a netTCP service on machine A, with a client with a basicHttp service on machine B, and a client for the basicHttp service also on machine B ... however, if I move that client to any machine C I get the following error: The exception is 'The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:10:00'' and the inner message is 'An existing connection was forcibly closed by the remote host'. I am beginning to think this is more a network issue than config ... but then I'm grasping at straws ...
the config files are as follows (heading from the client down to the netTCP layer) <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="basicHttpBindingEndpoint" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:10:00" sendTimeout="00:02:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://panrelease01/WCFTopWindowsTest/Service1.svc" binding="basicHttpBinding" bindingConfiguration="basicHttpBindingEndpoint" contract="ServiceReference1.IService1" name="basicHttpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour" /> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> the service for the client (basicHttp service and the client for the netTCP service) <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTcpBindingEndpoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> <basicHttpBinding> <binding name="basicHttpWindows"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows"></transport> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="net.tcp://5d2x23j.panint.com/netTCPwindows/Service1.svc" binding="netTcpBinding" bindingConfiguration="netTcpBindingEndpoint" contract="ServiceReference1.IService1" name="netTcpBindingEndpoint" behaviorConfiguration="ImpersonationBehaviour"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="ImpersonationBehaviour"> <clientCredentials> <windows allowedImpersonationLevel="Impersonation" allowNtlm="true"/> </clientCredentials> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> 
<serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="WCFTopWindowsTest.Service1" behaviorConfiguration="WCFTopWindowsTest.basicHttpWindowsBehaviour"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="basicHttpWindows" name ="basicHttpBindingEndpoint" contract ="WCFTopWindowsTest.IService1"> </endpoint> </service> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration> then finally the service for the netTCP layer <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <authentication mode="Windows"></authentication> <authorization> <allow roles="*"/> </authorization> <compilation debug="true" targetFramework="4.0" /> <identity impersonate="true" /> </system.web> <system.serviceModel> <bindings> <netTcpBinding> <binding name="netTCPwindows"> <security mode="Transport"> <transport clientCredentialType="Windows"></transport> </security> </binding> </netTcpBinding> </bindings> <services> <service behaviorConfiguration="netTCPwindows.netTCPwindowsBehaviour" name="netTCPwindows.Service1"> <endpoint address="" bindingConfiguration="netTCPwindows" binding="netTcpBinding" name="netTcpBindingEndpoint" contract="netTCPwindows.IService1"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mextcp" binding="mexTcpBinding" contract="IMetadataExchange"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:8721/test2" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior name="netTCPwindows.netTCPwindowsBehaviour"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="false" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true" /> <directoryBrowse enabled="true" /> </system.webServer> </configuration>

    Read the article

  • speed string search in PHP

    - by Marc
    Hi! I have a 1.2GB file that contains a one-line string. What I need is to search the entire file to find the position of another string (currently I have a list of strings to search for). The way I'm doing it now is opening the big file and moving a pointer through 4KB blocks, then moving the pointer X positions back in the file and getting 4KB more. My problem is that the bigger the string to search for, the longer it takes to find it. Can you give me some ideas to optimize the script and get better search times? This is my implementation: function busca($inici){ $limit = 4096; $big_one = fopen('big_one.txt','r'); $options = fopen('options.txt','r'); while(!feof($options)){ $search = trim(fgets($options)); $retro = strlen($search);//maybe setting this position absolute? (like 12 or 15) $punter = 0; while(!feof($big_one)){ $ara = fgets($big_one,$limit); $pos = strpos($ara,$search); $ok_pos = $pos + $punter; if($pos !== false){ echo "$pos - $punter - $search : $ok_pos <br>"; break; } $punter += $limit - $retro; fseek($big_one,$punter); } fseek($big_one,0); } } Thanks in advance!
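    One idea for better times, sketched in C rather than PHP purely to illustrate the approach (memmem() is a GNU extension; all names are made up): read the 1.2GB file once and test every needle against each chunk, carrying over the last (longest needle - 1) bytes so matches straddling a chunk boundary are still seen, instead of rescanning the whole file for each needle.

      #define _GNU_SOURCE           /* for memmem() */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Illustrative only: reports the first hit per needle per chunk and may
         report a hit sitting in the carried-over overlap region twice. */
      static void scan_once(const char *path, const char **needles, size_t count)
      {
          size_t maxlen = 1;
          for (size_t i = 0; i < count; i++) {
              size_t len = strlen(needles[i]);
              if (len > maxlen)
                  maxlen = len;
          }

          FILE *fp = fopen(path, "rb");
          if (fp == NULL)
              return;

          char *buf = malloc(4096 + maxlen);
          if (buf == NULL) {
              fclose(fp);
              return;
          }

          size_t carry = 0;        /* bytes kept from the previous chunk */
          long long base = 0;      /* file offset of buf[0] */
          size_t got;

          while ((got = fread(buf + carry, 1, 4096, fp)) > 0) {
              size_t have = carry + got;
              for (size_t i = 0; i < count; i++) {
                  char *hit = memmem(buf, have, needles[i], strlen(needles[i]));
                  if (hit != NULL)
                      printf("%s at offset %lld\n", needles[i],
                             base + (long long)(hit - buf));
              }
              carry = (have > maxlen - 1) ? maxlen - 1 : have;   /* keep the tail */
              memmove(buf, buf + have - carry, carry);
              base += (long long)(have - carry);
          }
          free(buf);
          fclose(fp);
      }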

    Read the article

  • HDFS some datanodes of cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10GB of data. After all mappers finished and around 80% of the reducers had been processed, one or more datanodes randomly disconnected from the network, and then the other datanodes started to disappear from the network as well, even if I killed the MapReduce job when I found that a datanode had disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls of all computing nodes, disabling SELinux and increasing the open file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce: 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) and the following are logs from the datanode: 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: > Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write.
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this is 1) reduce timeout to a low value or 2) query connections for when they will next update and generate an adequate timeout value. However if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here: #!/usr/bin/python import time import socket import asyncore # in seconds UPDATE_PERIOD = 4.0 class Channel(asyncore.dispatcher): def __init__(self, sock, sck_map): asyncore.dispatcher.__init__(self, sock=sock, map=sck_map) self.last_update = 0.0 # should update immediately self.send_buf = '' self.recv_buf = '' def writable(self): return len(self.send_buf) > 0 def handle_write(self): nbytes = self.send(self.send_buf) self.send_buf = self.send_buf[nbytes:] def handle_read(self): print 'read' print 'recv:', self.recv(4096) def handle_close(self): print 'close' self.close() # added for variable timeout def update(self): if time.time() >= self.next_update(): self.send_buf += 'hello %f\n'%(time.time()) self.last_update = time.time() def next_update(self): return self.last_update + UPDATE_PERIOD class Server(asyncore.dispatcher): def __init__(self, port, sck_map): asyncore.dispatcher.__init__(self, map=sck_map) self.port = port self.sck_map = sck_map self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.bind( ("", port)) self.listen(16) print "listening on port", self.port def handle_accept(self): (conn, addr) = self.accept() Channel(sock=conn, sck_map=self.sck_map) # added for variable timeout def update(self): pass def next_update(self): return None sck_map = {} server = Server(9090, sck_map) while True: next_update = time.time() + 30.0 for c in sck_map.values(): c.update() # <-- fill write buffers n = c.next_update() #print 'n:',n if n is not None: next_update = min(next_update, n) _timeout = max(0.1, next_update - time.time()) asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • Cannot open database "DataDir" requested by the login. The login failed

    - by Turkan Caglar
    Hi, I have a single-user application that uses a SQL Server 2005 database. My computer crashed while my software was running. I saved a copy of my database after I rescued my computer. However, my software now gives me the following message. (I don't know anything about SQL.) I guess my database has the last login information: since the software was not shut down properly, it still thinks that I am logged in and doesn't allow me to log in again. How can I solve the problem? Can you help me? Or do you know any source where I can get help? Cannot open database "DataDir" requested by the login. The login failed ComputerName = GURELS User ID=sa;Initial Catalog=DataDir;Data Source=GURELS\CSS;Application Name=ChefTec User ID=sa;Initial Catalog=CTDir;Data Source=GURELS\CSS;Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Application Name=ChefTec;Workstation ID=GURELS;Use Encryption for Data=False;Tag with column collation when possible=False Thank you very much, Turkan. E-mail: [email protected] Phone: 617 259 6644

    Read the article

  • Client calls .asmx, Server exposes WCF endpoint

    - by Lucas B
    We have a client that has been configured to connect to an asmx service. We don't want to ask our customers to update their configuration, but we would like to upgrade our service to use WCF. Does anyone know if WCF supports this? If so, what would the configuration file look like? Our asmx service looks like this: <bindings> <binding name="ATransactionSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://.../atransaction.asmx" binding="basicHttpBinding" bindingConfiguration="ATransactionSoap" contract="ATransactionSoap" name="ATransactionSoap" />

    Read the article

  • WCF client binding configuration in program code

    - by smarsha
    I have the following class that configures security, encoding, and token parameters but I am having trouble adding a BasicHttpBinding to specify a MaxReceivedMessageSize. Any insight would be appreciated. public class MultiAuthenticationFactorBinding { public static Binding CreateMultiFactorAuthenticationBinding() { HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement(); CustomBinding binding = new CustomBinding(); binding.Name = "myCustomBinding"; TransportSecurityBindingElement messageSecurity = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); messageSecurity.AllowInsecureTransport = true; messageSecurity.EnableUnsecuredResponse = true; messageSecurity.MessageSecurityVersion = MessageSecurityVersion.WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12; messageSecurity.SecurityHeaderLayout = SecurityHeaderLayout.Strict; messageSecurity.IncludeTimestamp = true; messageSecurity.SetKeyDerivation(false); TextMessageEncodingBindingElement Quota = new TextMessageEncodingBindingElement(MessageVersion.Soap11, System.Text.Encoding.UTF8); Quota.ReaderQuotas.MaxDepth = 32; Quota.ReaderQuotas.MaxStringContentLength = Int32.MaxValue; Quota.ReaderQuotas.MaxArrayLength = 16384; Quota.ReaderQuotas.MaxBytesPerRead = 4096; Quota.ReaderQuotas.MaxNameTableCharCount = 16384; X509SecurityTokenParameters clientX509SupportingTokenParameters = new X509SecurityTokenParameters(); clientX509SupportingTokenParameters.InclusionMode = SecurityTokenInclusionMode.AlwaysToRecipient; clientX509SupportingTokenParameters.RequireDerivedKeys = false; messageSecurity.EndpointSupportingTokenParameters.Endorsing.Add(clientX509SupportingTokenParameters); //binding.ReceiveTimeout = new TimeSpan(0,0,300); binding.Elements.Add(Quota); binding.Elements.Add(messageSecurity); binding.Elements.Add(httpTransport); return binding; } }

    Read the article

  • WCF Multiple Service Configuration Issue

    - by Goober
    Scenario: Ignoring the fact that some of the settings might be wrong and inconsistent, why does the program fail to compile when I try to put these 2 separate configurations for WCF services into the same APP.CONFIG file? One was written by myself and the other by a friend, yet I cannot get the application to compile. What have I missed? ERROR Type Initialization Exception CODE <configuration> <system.serviceModel> <!--START Service 1 CONFIGURATION--> <bindings> <netTcpBinding> <binding name="tcpServiceEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:05:00" enabled="true" /> <security mode="None"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="" binding="netTcpBinding" bindingConfiguration="tcpServiceEndPoint" contract="ListenerService.IListenerService" name="tcpServiceEndPoint" /> </client> <!--END Service 1 CONFIGURATION-->

    Read the article

  • Merge n files using a C program

    - by Amal
    I am writing a download accelerator, so I download a file from the web server in n parts. Now I want to merge the parts into one single file, using the following code. The file names are in the correct order, but the output file I am getting is different from the original downloaded file. Can you tell me where the error could be? int cbd_merge_files(const char** filenames, int n, const char* final_filename) { FILE* fp = fopen(final_filename, "wb"); if (fp == NULL) return 1; char buffer[4097]; for (int i = 0; i < n; ++i) { const char* fname = filenames[i]; FILE* fp_read = fopen(fname, "rb"); if (fp_read == NULL) return 1; int n; while ((n = fread(buffer, sizeof(char), 4096, fp_read))) { int k = fwrite(buffer, sizeof(char), n, fp); if (!k) return 1; } fclose(fp_read); } fclose(fp); return 0; }
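    For what it's worth, the merge loop itself looks essentially sound (the inner int n only shadows the parameter inside the loop body, so the file count is unaffected). A slightly hardened sketch, offered only as a way to rule the merge step out, renames the shadowed counter and treats short writes and read errors as failures; if the merged file still differs, the mismatch most likely comes from the downloaded parts or their ordering rather than from this function:

      #include <stdio.h>

      int cbd_merge_files(const char **filenames, int n, const char *final_filename)
      {
          FILE *out = fopen(final_filename, "wb");
          if (out == NULL)
              return 1;

          char buffer[4096];
          for (int i = 0; i < n; ++i) {
              FILE *in = fopen(filenames[i], "rb");
              if (in == NULL) {
                  fclose(out);
                  return 1;
              }
              size_t got;
              while ((got = fread(buffer, 1, sizeof buffer, in)) > 0) {
                  if (fwrite(buffer, 1, got, out) != got) {  /* short write = error */
                      fclose(in);
                      fclose(out);
                      return 1;
                  }
              }
              if (ferror(in)) {          /* distinguish a read error from EOF */
                  fclose(in);
                  fclose(out);
                  return 1;
              }
              fclose(in);
          }
          return fclose(out) == 0 ? 0 : 1;
      }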

    Read the article
