Search Results

Search found 22656 results on 907 pages for 'free amazon plugin'.


  • Using iptables to selectively route outgoing requests?

    - by Olivier
    Hello, I'd like to set up my DD-WRT firewall so that it uses the VPN service from a VPN provider to access a handful of destinations and the normal route for all other requests. In detail: giganews.com would be accessed through the VyprVPN tunnel, while normal web sites such as Amazon, eBay, et al. would go through the transparent firewall. I've run into SOOOO many tutorials but I can't manage to understand what the different entities are. Any help? Thanks

    Read the article

  • Mass bulk add domains to web hosting service (possible?)

    - by Scott
    I was wondering if anyone does bulk adding of domains to their web hosting provider (Amazon, Linode, Rackspace, etc.). I am thinking of creating a product that allows users to host their sites on top of my web hosting, and I want something that lets me bulk add domains (and point their DNS to my web host's DNS) with as little manual work as possible. I am thinking of getting a VPS to do this. Is this even possible? Thanks, Scott

    Read the article

  • Eucalyptus / no bootable EBS yet - any alternatives to Eucalyptus?

    - by itgorilla
    I've read a couple of articles saying that Eucalyptus doesn't support bootable EBS yet. This is a problem since you can't make a backup from an instance the way you are able to on the Amazon cloud or the Rackspace Cloud. If you ever reboot the physical Eucalyptus server that's running the node controller, the instance is gone along with all your settings. Are there any alternatives to Eucalyptus, or is it just the only game in town when it comes to open-source clouds?

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

      # /etc/fstab: static file system information.
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      LABEL=uec-rootfs / ext4 defaults 0 0
      /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
      /dev/sda3 none swap sw,comment=cloudconfig 0 0
      s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

      $ sudo umount /mnt/s3/mybucket
      $ sudo mount -a
      $ mountpoint /mnt/s3/mybucket
      /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

      mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated it down to the problem and built a Fabric task that reproduces it:

      def remount_s3fs():
          sudo("mount -a")

    Which does:

      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    [And yes, I was sure to unmount it before running this task.] When I check the mount using mountpoint I get:

      $ mountpoint /mnt/s3/mybucket
      mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected
      Done.

    But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
      [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that transport endpoint not connected error. I've also tried copying and pasting the exact command into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?

    Read the article

  • Linksys WRT350N dd-wrt: how can I give upload priority to Slingbox?

    - by ufk
    Hi. I have the WRT350N router by Linksys and I've installed dd-wrt v2.4 sp1 on it. How can I configure it to give upload priority to the Slingbox? The Slingbox is a TV streaming device that enables users to remotely view their home cable TV. It seems that the Slingbox streams to Amazon EC2 servers and from there to the client, so I can't really give priority based on IP address.

    Read the article

  • Design a PDF template and populate data at runtime using Java, XML, etc.

    - by Samant
    Well, I have been looking for a Java-based PDF solution... we don't have a clean way yet, I guess. All solutions are primitive and more or less workarounds. There is no easy solution for this requirement: 1. Design a PDF template using an IDE (e.g. LiveCycle Designer, which is not free). 2. Then, at runtime, use Java to populate data into this PDF template, either from XML or from other data sources. Such a simple requirement, and nothing has a good "open-source and free" solution yet! Is anyone aware of any? I have been searching for 3-4 years now for a clean way out. Eclipse BIRT comes close, but does not handle barcode elements out of the box. Jasper/iReport is also good, but that tool does not have a table concept and is kind of annoying! Its barcode support is not good either. XSL-FO has no free IDE for design. Looking for a better answer... got one?
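
    For the second half of that requirement (filling a designed template at runtime), one commonly used approach is to give the template AcroForm fields and fill them with iText. The following is only a sketch of that idea, not a recommendation: the template path and field names are hypothetical, and whether iText counts as "free" depends on the version (the old 2.1.x line was MPL/LGPL, while iText 5.x is AGPL).

      import java.io.FileOutputStream;
      import com.itextpdf.text.pdf.AcroFields;
      import com.itextpdf.text.pdf.PdfReader;
      import com.itextpdf.text.pdf.PdfStamper;

      public class FillTemplate {
          public static void main(String[] args) throws Exception {
              // Open the designed template (assumed to contain AcroForm fields).
              PdfReader reader = new PdfReader("invoice-template.pdf");
              PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("invoice-filled.pdf"));

              // Populate the fields; the field names here are hypothetical.
              AcroFields form = stamper.getAcroFields();
              form.setField("customerName", "ACME Corp.");
              form.setField("invoiceTotal", "1,234.00");

              // Flatten so the output is a plain, non-editable PDF.
              stamper.setFormFlattening(true);
              stamper.close();
              reader.close();
          }
      }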

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver. 0.21). Some datanodes of the cluster were suddenly disconnected while I was running MapReduce code on 10 GB of data. After all mappers had finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network as well, even after I killed the MapReduce job once I found some datanodes were disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls on all computing nodes, disabled SELinux and increased the open-file limit to 20000, but none of it worked at all... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce:

      12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
      java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
          at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    and the following are logs from a datanode:

      2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
      2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
      2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
          at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
          at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
          at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
          at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
          at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
          at java.lang.Thread.run(Thread.java:722)
      2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
      2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
          at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
          at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
          at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
          at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
          at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
          at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
          at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

      <configuration>
        <property>
          <name>dfs.name.dir</name>
          <value>/home/hadoop/data/name</value>
        </property>
        <property>
          <name>dfs.data.dir</name>
          <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
        </property>
        <property>
          <name>dfs.replication</name>
          <value>3</value>
        </property>
        <property>
          <name>dfs.datanode.max.xcievers</name>
          <value>4096</value>
        </property>
        <property>
          <name>dfs.http.address</name>
          <value>0.0.0.0:20070</value>
          <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.http.address</name>
          <value>0.0.0.0:20075</value>
          <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.secondary.http.address</name>
          <value>0.0.0.0:20090</value>
          <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.address</name>
          <value>0.0.0.0:20010</value>
          <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.ipc.address</name>
          <value>0.0.0.0:20020</value>
          <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.https.address</name>
          <value>0.0.0.0:20475</value>
        </property>
        <property>
          <name>dfs.https.address</name>
          <value>0.0.0.0:20470</value>
        </property>
      </configuration>

    mapred-site.xml:

      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>masternode:29001</value>
        </property>
        <property>
          <name>mapred.system.dir</name>
          <value>/home/hadoop/data/mapreduce/system</value>
        </property>
        <property>
          <name>mapred.local.dir</name>
          <value>/home/hadoop/data/mapreduce/local</value>
        </property>
        <property>
          <name>mapred.map.tasks</name>
          <value>32</value>
          <description>default number of map tasks per job.</description>
        </property>
        <property>
          <name>mapred.tasktracker.map.tasks.maximum</name>
          <value>4</value>
        </property>
        <property>
          <name>mapred.reduce.tasks</name>
          <value>8</value>
          <description>default number of reduce tasks per job.</description>
        </property>
        <property>
          <name>mapred.map.child.java.opts</name>
          <value>-Xmx2048M</value>
        </property>
        <property>
          <name>io.sort.mb</name>
          <value>500</value>
        </property>
        <property>
          <name>mapred.task.timeout</name>
          <value>1800000</value> <!-- 30 minutes -->
        </property>
        <property>
          <name>mapred.job.tracker.http.address</name>
          <value>0.0.0.0:20030</value>
          <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>mapred.task.tracker.http.address</name>
          <value>0.0.0.0:20060</value>
          <description>50060</description>
        </property>
      </configuration>

    Read the article

  • Building a PayPal-based membership website - total noob - would appreciate help

    - by Ali
    This is a follow-up on my question on PayPal integration. I'm working on a membership site for racing fans. My membership site has 3 membership levels - free, gold and premium. When a user signs up he/she gets a free membership on the spot but has the option to upgrade to a gold membership for 4 dollars a month or a premium membership for 10 dollars a month. I've gone through the PayPal integration guide a few times and have a vague understanding of how to get this to work. I think the recurring payments option would be good enough - however, I don't know how to implement this in my system. When a user decides to go for a paid account, i.e. gold or premium from basic - what should I do on both my code side and on the PayPal account side? I'd really appreciate it if anyone would outline what I'd have to do here. Plus, when a user decides to upgrade from, let's say, a gold to a premium account, there is the issue of computing how much should be charged for the upgrade. E.g. a user has been billed 4 dollars and the next day opts to go for a premium account; assuming that the difference owed for the rest of the month is 5 dollars, and from then on all payments would be a recurring 10 dollars monthly - how do I implement this? And in case a user decides to downgrade from a premium account of 10 dollars a month to a gold account of 4 dollars a month - how do I handle the surplus which would have to be refunded for that month alone, and changing the membership? Likewise, if someone wishes to cancel membership and go back to a free account - how do I refund whatever is owed and cancel the subscription? I'm sorry if it sounds like I'm asking to be spoon-fed :( I'm quite new to this, this is for a client, and I would really appreciate all the help here - I really have to get this working right. Thanks again everyone - waiting for all your replies.
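
    The upgrade/downgrade amounts above are mostly arithmetic; below is a minimal sketch of one way to compute them, assuming 30-day billing cycles and day-based proration. It only illustrates the calculation - the actual recurring-profile changes and refunds would still have to go through PayPal's own APIs, and the class and method names are made up for the example.

      import java.math.BigDecimal;
      import java.math.RoundingMode;

      public class Proration {
          private static final int DAYS_PER_CYCLE = 30; // simplifying assumption

          // Amount still owed (positive) or to be refunded (negative) when switching
          // plans with daysLeft days remaining in the already-paid month.
          static BigDecimal planChangeAmount(BigDecimal oldMonthly, BigDecimal newMonthly, int daysLeft) {
              BigDecimal fractionLeft = BigDecimal.valueOf(daysLeft)
                      .divide(BigDecimal.valueOf(DAYS_PER_CYCLE), 10, RoundingMode.HALF_UP);
              // Value of the unused part of the old plan vs. cost of the new plan for the same period.
              BigDecimal unusedOld = oldMonthly.multiply(fractionLeft);
              BigDecimal newCost = newMonthly.multiply(fractionLeft);
              return newCost.subtract(unusedOld).setScale(2, RoundingMode.HALF_UP);
          }

          public static void main(String[] args) {
              BigDecimal gold = new BigDecimal("4.00");
              BigDecimal premium = new BigDecimal("10.00");
              // Upgrade one day into the month: roughly 5.80 due now, then 10.00/month recurring.
              System.out.println("upgrade charge: " + planChangeAmount(gold, premium, 29));
              // Downgrade one day into the month: a negative value means a refund is due.
              System.out.println("downgrade refund: " + planChangeAmount(premium, gold, 29));
          }
      }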

    Read the article

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked at my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system: My company would like to be able to sell the app for profit, not as a one-time payment but as a subscription fee added to the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the users database saying whether or not iPhone access is enabled for them, and distribute the app as a free app on the App Store. If the previous option is not legal, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for the users for using the iPhone app, but still free distribution through the App Store. Since our company is not American and has no office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

    Read the article

  • Question about WeakHashMap

    - by michael
    Hi, the Javadoc at "http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html" says: "Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector." And then: "Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object." But shouldn't both the key and the value be held through weak references in a WeakHashMap? I.e. when memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases), and if the GC frees the value object, the key object could be freed as well? Basically, I am looking for a HashMap which will reduce memory usage when memory is low (the GC collects the value and key objects if necessary). Is that possible in Java? Thank you.
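
    A common pattern for the memory-sensitive behaviour asked about above is to keep the keys in an ordinary map but wrap the values in SoftReference, which the garbage collector is allowed to clear when the heap runs low. The sketch below only illustrates that idea (the class name and generics are placeholders, and it is not thread-safe); java.util.WeakHashMap itself will not do this, because it holds its values strongly.

      import java.lang.ref.SoftReference;
      import java.util.HashMap;
      import java.util.Map;

      // Minimal memory-sensitive cache: values are softly reachable, so the GC may
      // clear them under memory pressure. Cleared entries become cache misses and
      // are removed lazily on the next lookup.
      public class SoftValueCache<K, V> {
          private final Map<K, SoftReference<V>> map = new HashMap<>();

          public void put(K key, V value) {
              map.put(key, new SoftReference<>(value));
          }

          public V get(K key) {
              SoftReference<V> ref = map.get(key);
              if (ref == null) {
                  return null;
              }
              V value = ref.get();
              if (value == null) {
                  // The GC cleared the value; drop the stale entry so the key
                  // can become collectable as well.
                  map.remove(key);
              }
              return value;
          }
      }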

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When the latency is high ("when pinging the server takes time"), the server roundtrips make the difference. I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions and in my scenario they are quite limiting from the performance point of view), so to make a wise choice I would like to also include this point in my RDBMS comparison feature matrix, to understand which is the best RDBMS to choose as an alternative to MS SQL. The bold sentence above would make me prefer MySQL to Firebird (for the roundtrips aspect, not in general), but can anyone add information? And where does MS SQL stand? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question by making a simple list like MSSQL:3; MySQL:1; Firebird:2; PostgreSQL:2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • gdb: Cannot find new threads: generic error

    - by Alexander Gladysh
    When I run GDB against a program which loads a .so that is linked to pthreads, GDB reports the error "Cannot find new threads: generic error". I'm probably missing something in my Ubuntu configuration (it was installed from a minimal install). Any clues?

      $ gdb --args lua -lluarocks.require
      GNU gdb (GDB) 7.0-ubuntu
      Copyright (C) 2009 Free Software Foundation, Inc.
      License GPLv3+: GNU GPL version 3 or later
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
      This GDB was configured as "x86_64-linux-gnu".
      For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>...
      Reading symbols from /usr/bin/lua...(no debugging symbols found)...done.
      (gdb) run
      Starting program: /usr/bin/lua -lluarocks.require
      Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
      require 'ev'
      [Thread debugging using libthread_db enabled]
      Cannot find new threads: generic error
      (gdb) q
      A debugging session is active.
      Inferior 1 [process 4986] will be killed.
      Quit anyway? (y or n) y

    This function gets called on require 'ev': http://github.com/brimworks/lua-ev/blob/master/lua_ev.c#L25-65

    Additional information about my system:

      $ uname -a
      Linux localhost 2.6.31-20-generic #58-Ubuntu SMP Fri Mar 12 04:38:19 UTC 2010 x86_64 GNU/Linux
      $ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 9.10
      Release: 9.10
      Codename: karmic

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface

    I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort on the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP. Therefore, I need a local DHCP server to assign IP addresses to the nodes, so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start.

    Question

    What DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option by performing a DNS lookup on that very host name?

    What I've tried

    I tried to make it work using the ISC DHCP server, however, the manual clearly states: "Please be aware that only the dhcp-client-identifier option and the hardware address can be used to match a host declaration, or the host-identifier option parameter for DHCPv6 servers. For example, it is not possible to match a host declaration to a host-name option. This is because the host-name option cannot be guaranteed to be unique for any given client, whereas both the hardware address and dhcp-client-identifier option are at least theoretically guaranteed to be unique to a given client." I also tried to create a class that matches the hostname like this:

      class "my-client-name" {
          match if option host-name = "my-client-name";
          fixed-address my-client-name.my-domain.com;
      }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

      subnet 10.103.0.0 netmask 255.255.0.0 {
          option routers 10.103.1.1;
          class "my-client-name" {
              match if option host-name = "my-client-name";
          }
          pool {
              allow members of "my-client-name";
              range 10.103.1.2 10.103.1.2;
          }
      }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do.

    About security

    Since this is only used in the bootstrapping phase on an internal network and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.

    Read the article

  • Is this video card good for 3 monitors? [on hold]

    - by Brian Briu
    Video card: http://www.amazon.de/Zotac-NVIDIA-GeForce-Grafikkarte-Speicher/dp/B00D4KCSHG/ref=sr_1_11?ie=UTF8&qid=1408575042&sr=8-11&keywords=grafikkarte 3 monitors like this: http://www.mediamarkt.de/mcs/product/ASUS-MX-239-H,48353,462790,951629.html?langId=-3 If yes, how can I connect them? If not, I'll buy one more video card like that. Also, tell me what motherboard I should buy for these things plus 32 GB of RAM and an i7 (remember that in the future I'll buy one more video card, to have 2). A power supply recommendation would help as well.

    Read the article

  • limit_req causing 503 Service Unavailable

    - by Hermione
    I'm frequently getting 503 Service Unavailable when I have limit_req turned on. In my logs:

      [error] 22963#0: *70136 limiting requests, excess: 1.000 by zone "blitz", client: 64.xxx.xxx.xx, server: dat.com, request: "GET /id/85 HTTP/1.1", host: "dat.com"

    My nginx configuration:

      limit_req_zone $binary_remote_addr zone=blitz:60m rate=5r/s;
      limit_req zone=blitz;

    How do I resolve this issue? Isn't 60m already big enough? All my static files are hosted on Amazon S3.

    Read the article

  • European alternatives to Dropbox?

    - by torbengb
    Dropbox is positively brilliant, but the data center is probably in the US somewhere. Since I'm in Europe, there's plenty of lag and a poor upload rate. Are there any similar services using data centers in Europe? I'm looking for a free plan (circa 2 GB), so sites like Amazon S3 aren't good answers.

    Read the article

  • Windows Server not installing audio drivers properly

    - by user34322
    I am trying to install a dummy sound card on a Windows Server machine (in the Amazon EC2 cloud) in order for one of my applications to work. I'm trying to accomplish that with Virtual Audio Cable and REAUDIO3. Both tools managed to install a new device on my Windows XP machine, but on Windows Server no new devices appear. The Windows Audio and Plug and Play services are Started and set to Automatic. Any ideas are greatly appreciated.

    Read the article

  • SSH: Configure ssh_config to use specific key file for a specific server fingerprint

    - by Penthi
    I have a key-based login for a server. The IP and DNS name of the server can change, because it is hosted on Amazon. Is there a way to configure the ssh client config to use a specific key file for this server only, when the fingerprint of the server matches? In other words: normally servers are matched by IP or DNS name in the ssh client config. I want to do this by fingerprint, because the IP and DNS name can change.

    Read the article

  • What's the best way to backup a web server with 30GB of data?

    - by andypa
    I currently have a server (Linux) with around 10'000 daily users on it. The hoster offers a backup which I'm also using. Although I trust my hoster, I would like to have an offsite backup, just in case the hoster goes down for a longer time or goes bankrupt (you never know). My idea was to tar and split the data and copy the archives to my Amazon S3 account, but I'm wondering if that's the best idea? Any tip is appreciated. Thanks, Andy
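
    If the tar-and-split route is taken, the upload step can be scripted against the S3 API. The fragment below is just a rough sketch using the AWS SDK for Java; the bucket name, key prefix and local directory are placeholders, and credentials are assumed to come from the SDK's default provider chain.

      import java.io.File;
      import com.amazonaws.services.s3.AmazonS3;
      import com.amazonaws.services.s3.AmazonS3Client;

      public class OffsiteBackup {
          public static void main(String[] args) {
              // Credentials are picked up from the environment or ~/.aws/credentials.
              AmazonS3 s3 = new AmazonS3Client();

              // Hypothetical bucket and directory of archive parts produced by tar + split.
              String bucket = "my-offsite-backups";
              File[] parts = new File("/var/backups/site").listFiles();

              if (parts != null) {
                  for (File part : parts) {
                      // Store each chunk under a common prefix so old backups can be pruned later.
                      s3.putObject(bucket, "weekly/" + part.getName(), part);
                  }
              }
          }
      }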

    Read the article

  • Replacing a unicode character in UTF-8 file using delphi 2010

    - by Jake Snake
    I am trying to replace the character with decimal value 197 in a UTF-8 file with the character with decimal value 65. I can load the file and put it in a string (I may not need to do that, though):

      SS := TStringStream.Create(ParamStr1, TEncoding.UTF8);
      SS.LoadFromFile(ParamStr1);
      //S := SS.DataString;
      //ShowMessage(S);

    However, how do I replace all 197's with a 65, and save it back out as UTF-8?

      SS.SaveToFile(ParamStr2);
      SS.Free;

    -------------- EDIT ----------------

      reader := TStreamReader.Create(ParamStr1, TEncoding.UTF8);
      writer := TStreamWriter.Create(ParamStr2, False, TEncoding.UTF8);
      while not Reader.EndOfStream do
      begin
        S := reader.ReadLine;
        for I := 1 to Length(S) do
        begin
          if Ord(S[I]) = 350 then
          begin
            Delete(S, I, 1);
            Insert('A', S, I);
          end;
        end;
        writer.Write(S + #13#10);
      end;
      writer.Free;
      reader.Free;

    Read the article

  • C two functions in one with casts

    - by Favolas
    I have two functions that do the exact same thing but on two different types of struct, and those two struct types are very similar. Imagine I have these two structs:

      typedef struct nodeOne{
          Date *date;
          struct nodeOne *next;
          struct nodeOne *prev;
      }NodeOne;

      typedef struct nodeTwo{
          Date *date;
          struct nodeTwo *next;
          struct nodeTwo *prev;
      }NodeTwo;

    Since my functions to destroy each of the lists are almost the same (just the types of the arguments differ), I would like to make just one function that does both things. I have these two functions:

      void destroyListOne(NodeOne **head, NodeOne **tail){
          NodeOne *aux;
          while (*head != NULL){
              aux = *head;
              *head = (*head)->next;
              free(aux);
          }
          *tail = NULL;
      }

    and this one:

      void destroyListTwo(NodeTwo **head, NodeTwo **tail){
          NodeTwo *aux;
          while (*head != NULL){
              aux = *head;
              *head = (*head)->next;
              free(aux);
          }
          *tail = NULL;
      }

    Since they are very similar, I thought of making something like this:

      void destroyList(void **ini, void **end, int listType){
          if (listType == 0) {
              NodeOne *aux;
              NodeOne head = (NodeOne) ini;
              NodeOne tail = (NodeOne) ed;
          } else {
              NodeTwo *aux;
              NodeTwo head = (NodeTwo) ini;
              NodeTwo tail = (NodeTwo) ed;
          }
          while (*head != NULL){
              aux = *head;
              *head = (*head)->next;
              free(aux);
          }
          *tail = NULL;
      }

    As you may know, this is not working, but I want to know if something like it is possible to achieve. I must keep both structs as they are.

    Read the article

  • Connect a Laptop with a 5.1 Sound System

    - by mfcabrera
    I will be replacing my desktop computer with a laptop, and I would like to connect it to my existing 5.1 sound system. Now, the sound system has three different input channels, but most laptops only have a single sound output. What do I need in order to connect my future laptop to my sound system? Do I need a sound card? Example laptop: ASUS N53JF-XE1 http://www.amazon.com/gp/product/B00492C6TI/ref=ord_cart_shr?ie=UTF8&m=ATVPDKIKX0DER

    Read the article
