Search Results

Search found 25343 results on 1014 pages for 'write protect'.

  • What are the necessary periodic checks for a server?

    - by Edmund
    Hi all, I have some servers which my team uses for hosting internal applications for development purposes. I am thinking of setting up some periodic checks but do not know how to go about it. Can you advise on the following? A Windows bat file or a Linux script is preferred. How do I write a script that checks the content of a webpage to verify whether the site is down? How do I write a script that checks whether the website is down by pinging it? How do I write a script that checks whether the server is running out of disk space? And how do I write a script that emails the system administrator if any of the above checks fail?
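    A minimal sketch of the kind of check script being asked for, written in Python rather than a bat file or shell script (assumptions: Python 3 on a Linux box, hence ping -c; the URL, host, free-space threshold and mail addresses are placeholders, and a local SMTP relay is assumed for the alert mail):

      import shutil
      import smtplib
      import subprocess
      import urllib.request
      from email.message import EmailMessage

      URL = "http://intranet.example.local/"   # placeholder application URL
      HOST = "intranet.example.local"          # placeholder host to ping
      DISK_PATH = "/"                          # mount point to watch
      MIN_FREE_BYTES = 5 * 1024**3             # alert below 5 GB free
      ADMIN = "admin@example.local"            # placeholder admin address

      def webpage_up():
          # Fetch the page and treat any error or non-200 status as "down".
          try:
              with urllib.request.urlopen(URL, timeout=10) as resp:
                  return resp.status == 200
          except Exception:
              return False

      def host_pingable():
          # One ping, Linux syntax; use "-n 1" on Windows instead.
          return subprocess.call(["ping", "-c", "1", HOST],
                                 stdout=subprocess.DEVNULL) == 0

      def disk_ok():
          # True while there is still enough free space on DISK_PATH.
          return shutil.disk_usage(DISK_PATH).free >= MIN_FREE_BYTES

      def email_admin(problems):
          # Send one mail listing everything that failed, via a local SMTP relay.
          msg = EmailMessage()
          msg["Subject"] = "Server check failed"
          msg["From"] = "checks@example.local"
          msg["To"] = ADMIN
          msg.set_content("Failed checks: " + ", ".join(problems))
          with smtplib.SMTP("localhost") as smtp:
              smtp.send_message(msg)

      if __name__ == "__main__":
          results = [("webpage", webpage_up()),
                     ("ping", host_pingable()),
                     ("disk space", disk_ok())]
          failed = [name for name, ok in results if not ok]
          if failed:
              email_admin(failed)

    Run it from cron (or Task Scheduler) every few minutes; nothing is sent while all three checks pass.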

    Read the article

  • One of the partitions on a USB hard disk does not automount

    - by holmescn
    It is a very strange problem. My USB hard disk has four partitions: one is primary and the other three are logical (contained within an extended partition). When I plug in the disk, three of the partitions are mounted automatically; the one exception is the first logical partition in the extended partition. Initially I thought it was a problem with the system (at that time I used Mint), but after I changed to Ubuntu 12.04 the problem remained. I don't want to add a rule to fstab; I want to know what is happening. The disk is fine, and the partition can be accessed in Windows and mounted manually. Result of dmesg | tail:

      [100933.557649] usb 2-1.2: new high-speed USB device number 4 using ehci_hcd
      [100933.651891] scsi8 : usb-storage 2-1.2:1.0
      [100934.649047] scsi 8:0:0:0: Direct-Access SAMSUNG PQ: 0 ANSI: 0
      [100934.650963] sd 8:0:0:0: Attached scsi generic sg2 type 0
      [100934.651342] sd 8:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
      [100934.651977] sd 8:0:0:0: [sdb] Write Protect is off
      [100934.651989] sd 8:0:0:0: [sdb] Mode Sense: 03 00 00 00
      [100934.652836] sd 8:0:0:0: [sdb] No Caching mode page present
      [100934.652848] sd 8:0:0:0: [sdb] Assuming drive cache: write through
      [100934.655354] sd 8:0:0:0: [sdb] No Caching mode page present
      [100934.655367] sd 8:0:0:0: [sdb] Assuming drive cache: write through
      [100934.734652] sdb: sdb1 sdb3 < sdb5 sdb6 sdb7 >
      [100934.737706] sd 8:0:0:0: [sdb] No Caching mode page present
      [100934.737725] sd 8:0:0:0: [sdb] Assuming drive cache: write through
      [100934.737731] sd 8:0:0:0: [sdb] Attached SCSI disk

    Result of parted -l:

      Model: SAMSUNG (scsi)
      Disk /dev/sdb: 320GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos

      Number  Start   End     Size    Type      File system  Flags
       1      32.3kB  21.5GB  21.5GB  primary   ntfs
       3      21.5GB  320GB   299GB   extended               lba
       5      21.5GB  129GB   107GB   logical   ntfs
       6      129GB   236GB   107GB   logical   ntfs
       7      236GB   320GB   83.8GB  logical   ntfs

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups have completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not backup the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull out drives out a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article

  • Configuring Linux Network

    - by Reiler
    Hi, I'm working on some software that runs on a CentOS 5.x installation. Our customers are not allowed to log in to Linux; everything is done from Windows applications developed by us. So we have built a frontend for the user to configure the network setup: static/DHCP, IP address, gateway, DNS, hostname. Right now I let the user enter the information in the Windows app and then write it on the Linux server like this: the nameserver goes to /etc/resolv.conf, the gateway and hostname go to /etc/sysconfig/network, and the IP address, netmask and BOOTPROTO (dhcp or static) go to /etc/sysconfig/network-scripts/ifcfg-eth0. I also (after some time) found out that I was unable to send mail unless I added "127.0.0.1 Hostname" to /etc/hosts. All this seems to work, but is there a better/easier way to do this? Also, I read the network configuration back in nearly the same way, but if I use DHCP I am missing some information, for instance the IP address. I know that I can get some of it from the command line (ifconfig), but that does not give me, for instance, the hostname, gateway or DNS. Is there a command-line tool that will display this?
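    A minimal sketch of the file-writing side of such a frontend in Python (assumptions: the same CentOS paths listed above, eth0 as the interface name, values already validated by the Windows application, and root privileges on the server):

      def write_network_config(hostname, gateway, dns, ip=None, netmask=None, dhcp=False):
          # Name resolution.
          with open("/etc/resolv.conf", "w") as f:
              f.write("nameserver %s\n" % dns)

          # Hostname and default gateway.
          with open("/etc/sysconfig/network", "w") as f:
              f.write("NETWORKING=yes\nHOSTNAME=%s\nGATEWAY=%s\n" % (hostname, gateway))

          # Interface configuration.
          lines = ["DEVICE=eth0", "ONBOOT=yes",
                   "BOOTPROTO=%s" % ("dhcp" if dhcp else "static")]
          if not dhcp:
              lines += ["IPADDR=%s" % ip, "NETMASK=%s" % netmask]
          with open("/etc/sysconfig/network-scripts/ifcfg-eth0", "w") as f:
              f.write("\n".join(lines) + "\n")

          # Keep local mail delivery working, as noted in the question.
          # (Simplified: a real implementation would merge with existing /etc/hosts entries.)
          with open("/etc/hosts", "w") as f:
              f.write("127.0.0.1 localhost %s\n" % hostname)

    For reading the live values back under DHCP, the hostname command, the default route shown by ip route, and the nameservers in /etc/resolv.conf are the usual places to look, but that part is outside this sketch.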

    Read the article

  • Permission folders redhat10

    - by aryan
    Dear all, I have written a data DVD with k3b, and now that I mount the DVD on my system I am limited in how I can read and write its folders. I tried to set their permissions but it's not possible: when I set File access to Read and write and press the "Apply permissions to enclosed files" button, after a few seconds my setting (Read and write) disappears and it returns to ---. Can anyone guide me, please? Thanks

    Read the article

  • MegaCli newly created disk doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After successfully adding 2 new disks as a new RAID virtual drive (background initialization done), I would have expected it to appear under /dev/sdh, but it's not there (so it is unusable). The system is running CentOS 5.2 64-bit, the HAL and udev daemons are running, there is no record of any sdh appearing in the message log file or in dmesg, and only MegaCli sees that virtual drive. Any idea? Some data:

      [root@server ~]# ./MegaCli -LDInfo -LALL -a0

      Adapter 0 -- Virtual Drive Information:
      Virtual Disk: 0 (target id: 0)
      Name:
      RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
      Size:139392MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:2
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default

      Virtual Disk: 1 (target id: 1)
      Name:
      RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
      Size:285568MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives:2
      Span Depth:1
      Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default

      [root@server ~]# ls -l /dev/disk/by-id/scsi-360*
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1
      lrwxrwxrwx 1 root root 9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg
      lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
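    One common way to make a newly created virtual drive visible without rebooting is to ask the kernel to rescan the SCSI hosts; a minimal sketch in Python (assumptions: the standard /sys/class/scsi_host interface is available on this kernel, and the script runs as root):

      import glob

      def rescan_scsi_hosts():
          # Ask every SCSI host adapter to rescan all channels, targets and LUNs.
          for scan_file in glob.glob("/sys/class/scsi_host/host*/scan"):
              with open(scan_file, "w") as f:
                  f.write("- - -\n")   # wildcards: channel, target, LUN

      if __name__ == "__main__":
          rescan_scsi_hosts()

    If the controller really has exported the new virtual drive, dmesg should then show a new sd device being attached.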

    Read the article

  • Get to Know a Candidate (19-25 of 25): Independent Candidates

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. The following independent candidates have gained access to at least one state ballot.

    Richard Duncan, of Ohio; Vice-presidential nominee: Ricky Johnson
      Candidate Ballot Access: Ohio - (18 Electoral)
      Write-In Candidate Access: Alaska, Florida, Indiana, Maryland

    Randall Terry, of West Virginia; Vice-presidential nominee: Missy Smith
      Candidate Ballot Access: Kentucky, Nebraska, West Virginia - (18 Electoral)
      Write-In Candidate Access: Colorado, Indiana

    Sheila Tittle, of Texas; Vice-presidential nominee: Matthew Turner
      Candidate Ballot Access: Colorado, Louisiana - (17 Electoral)

    Jeff Boss, of New Jersey; Vice-presidential nominee: Bob Pasternak
      Candidate Ballot Access: New Jersey - (14 Electoral)

    Dean Morstad, of Minnesota; Vice-presidential nominee: Josh Franke-Hyland
      Candidate Ballot Access: Minnesota - (10 Electoral)
      Write-In Candidate Access: Utah

    Jill Reed, of Wyoming; Vice-presidential nominee: Tom Cary
      Candidate Ballot Access: Colorado - (9 Electoral)
      Write-In Candidate Access: Indiana, Florida

    Jerry Litzel, of Iowa; Vice-presidential nominee: Jim Litzel
      Candidate Ballot Access: Iowa - (6 Electoral)

    That wraps it up, people. We have reviewed 25 presidential candidates in the 2012 U.S. election. Look for more blog posts about the election to come.

    Read the article

  • RAID 1 not performing as expected

    - by Faken
    I recently bought a new 320 GB hard drive for my computer to set up RAID 1 on it for some added security. Installation went as smoothly as could possibly be (plug in power, plug in the data cable, start up the computer, the Intel software recognized the new drive, right-click, create RAID 1, done!). However, for some inexplicable reason I get strange test results when using BENCH32. On my old configuration, a single 7200 rpm drive, I achieved about 60 MB/s write and 70 MB/s read. With the new RAID 1 configuration I would expect the write speed to be slightly diminished but the read speed to be significantly improved (though not exactly doubled). However, with the new configuration I am getting 90 MB/s write and only about 80 MB/s read. I should NOT be getting improved write performance, especially NOT better than read! What's going on? My system setup is: Q6600 2.4 GHz CPU, 4 GB DDR2 667 MHz RAM, onboard Intel ICH9R RAID chip, 2x Seagate 7200 RPM 320 GB drives in RAID 1, Windows 7 Home Premium 64-bit.

    Read the article

  • On Windows Server 2003, permissions are not propagating to a group that is a member of a group

    - by Joshua K
    Windows Server 2003 on i386. FTP server is running as the SYSTEM user/group. Some files we want served (read and write) are owned by the group 'ftp.' ftp has full read/write/whatever permissions on those files and directories. SYSTEM can't read/write those directories. So, I added SYSTEM to the 'ftp' group. Windows happily complied, but even after restarting Filezilla, it still could not read/write those files. Is there any way to do what we want without "re-permissioning" all those files? Running the ftp server as 'ftp' isn't really an option because it also serves files that are owned by SYSTEM (And not ftp). Sigh... :) Any insights?

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from Powershell. I kept getting:

      <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the SearchServer2008 search webservice was incorrect. I was calling

      $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

      $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample powershell script:

      # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
      $error.clear()
      #Bad SearchServer2008Express Search URL
      $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
      #Good SearchServer2008Express Search URL
      $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"
      $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential
      $queryXml = "<QueryPacket Revision='1000'>
        <Query >
          <SupportedFormats>
            <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
          </SupportedFormats>
          <Context>
            <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
            <!--<QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
          </Context>
        </Query>
      </QueryPacket>"
      $statusResponse = $search.Status()
      write-host '$statusResponse:'  $statusResponse
      $GetPortalSearchInfo = $search.GetPortalSearchInfo()
      write-host '$GetPortalSearchInfo:'  $GetPortalSearchInfo
      $queryResult = $search.Query($queryXml)
      write-host '$queryResult:'  $queryResult

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting that the current pending sectors count is "45". I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I write data directly to a bad sector, it should trigger a reallocation, reducing current pending sectors by one and increasing the reallocated sector count. However, on this disk both Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

      # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
      dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

      Model Family:     Western Digital Caviar Green (AF)
      Device Model:     WDC WD15EARS-00Z5B1
      Serial Number:    WD-WMAVU3027748
      LU WWN Device Id: 5 0014ee 25998d213
      Firmware Version: 80.00A80
      User Capacity:    1,500,301,910,016 bytes [1.50 TB]
      Sector Size:      512 bytes logical/physical
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ATA8-ACS (minor revision not indicated)
      SATA Version is:  SATA 2.6, 3.0 Gb/s
      Local Time is:    Fri Oct 18 17:47:29 2013 CDT
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
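    A small sketch of how these counters can be tracked over time from Python by parsing smartctl -A output (assumptions: smartmontools is installed, the script runs as root, and the three attribute names appear in the standard ATA attribute table):

      import subprocess

      ATTRS = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Reallocated_Event_Count")

      def read_smart_attrs(device="/dev/sdb"):
          # Return the raw values of the interesting SMART attributes as a dict.
          out = subprocess.run(["smartctl", "-A", device],
                               capture_output=True, text=True, check=True).stdout
          values = {}
          for line in out.splitlines():
              fields = line.split()
              # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
              if len(fields) >= 10 and fields[1] in ATTRS:
                  values[fields[1]] = int(fields[9])
          return values

      if __name__ == "__main__":
          print(read_smart_attrs())

    Logging this before and after a dd or shred run makes it easy to see whether the firmware ever actually reports a reallocation.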

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port): it's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). To copy 1.8 GB takes me over 10 minutes (it should be < 3 min). I have two identical SanDisk Cruzer 8GB sticks, and I have the same problem with both. I have a Super Talent 32GB USB SSD in the neighboring port and it works at the expected speeds. The problem I see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:

      [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
      [64059.526419] scsi8 : usb-storage 2-1.2:1.0
      [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
      [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
      [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
      [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
      [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
      [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.541290] sdd: sdd1
      [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
      [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
      [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk
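    The "instantly at 90%, then hangs" pattern usually means the data first lands in the page cache and the real transfer to the stick happens afterwards. A small sketch that forces each chunk to the device as it copies, so the reported rate reflects the stick itself rather than the cache (paths are placeholders):

      import os
      import time

      def timed_copy(src, dst, chunk=4 * 1024 * 1024):
          # Copy src to dst, fsync'ing as we go, and report the effective throughput.
          copied = 0
          start = time.time()
          with open(src, "rb") as fin, open(dst, "wb") as fout:
              while True:
                  data = fin.read(chunk)
                  if not data:
                      break
                  fout.write(data)
                  fout.flush()
                  os.fsync(fout.fileno())   # push this chunk to the device, not just the cache
                  copied += len(data)
          elapsed = time.time() - start
          print("%.1f MB in %.1f s = %.1f MB/s" % (copied / 1e6, elapsed, copied / 1e6 / elapsed))

      # Example (placeholder paths):
      # timed_copy("/home/eloff/big.iso", "/media/cruzer/big.iso")

    If the rate reported here is also far below USB 2.0 speeds, the bottleneck is the stick or the port, not the file manager's progress reporting.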

    Read the article

  • rfid programming

    - by MaKo
    Hi guys, I got a gift from a friend: 2 RFID readers and some cards (from a Chinese company called Daily RFID). They kind of work, because the package comes with some demo software written in Delphi that reads the ID of the card (Mifare compatible, ISO 14443A). The problem is that if I try to use the demo to write to them, it doesn't seem to work. There is another demo written in C# (compiled and executable from /bin/debug, DL600DemoCSharp.exe). The software opens, but when I click on connect I get this error: Unhandled exception.. unable to load DLL 'BasicB.DLL'. So I copied the DLL into windows/system32, but when I try regsvr32 BasicB.dll I get: error, the module "BasicB.dll" was loaded but the entry-point DllRegisterServer was not found. Make sure that "BasicB.dll" is a valid DLL or OCX file and then try again. I have written to the company but got no response. I program in Objective-C, so I kind of understand C#, but how do I make these cards work? Shall I continue with the Delphi demo and try to write to them with it, or with C#? Either way I would have to write the code to read from and write to them. Or is there any software to work with these modules? Thanks a lot!

    Read the article

  • htaccess problem

    - by rohit
    My htaccess file is below:

      DirectoryIndex index.php
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} ^(.*)//(.*)$
      RewriteRule . %1/%2 [R=301,L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^.*$ index.php?qa-rewrite=$0&%{QUERY_STRING} [L]
      RewriteCond %{HTTP_HOST} ^www\.domain\.co\.cc$ [NC]
      RewriteRule ^(.*)$ http://domain.co.cc/$1 [L,R=301]

    When I type www.domain.co.cc it does not work, while just domain.co.cc/ works fine. Please help me out with the www issue. I added the last two lines so that when a user types www.domain.co.cc it redirects to domain.co.cc, but it is still not working.

    Read the article

  • FTP Error 550 when trying to access a folder via symbolic link

    - by OrangeTux
    I'm configuring vsftpd on a Linux machine. At the moment local users can log in via FTP and they see their home dir listed; they have write access to it. Now I want the users to be able to write in the /var/www/ dir. Therefore I created a new group, apache, added the users to the group and gave the group write access to /var/www. From the terminal all users can write to /var/www/. I created a link in the home directory to /var/www via ln -s /var/www/ /home/user/www. ls gives:

      drwxr-xr-x 2 orangetux orangetux 4096 Jun 23 15:06 ftp
      lrwxrwxrwx 1 orangetux orangetux 21 Jun 23 15:00 www -> /var/www/

    But when I use FTP I see the link but I cannot follow it: error 550, which means file not found or bad access. How can I solve this, so that the users have access to /var/www via their home dir?

    Read the article

  • Extending a partition with GParted on Linux but no more space in the VM

    - by Asken
    I have a VM test installation of Linux running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8 GB drive to play with. Well into the test the builds are consuming more and more space, of course. The VM drive was resized to 21 GB, and using GParted I expanded the drive partitions; that all worked fine, but when I go back into the console and run df there is still only 8 GB available. How can I claim the other 13 GB I added?

      fdisk -l

      Disk /dev/sda: 21.0 GB, 20971520000 bytes
      255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0006d284

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *        2048      499711      248832   83  Linux
      /dev/sda2          501758    40959999    20229121    5  Extended
      /dev/sda5          501760    40959999    20229120   8e  Linux LVM

      vgdisplay

      --- Volume group ---
      VG Name               ct
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               2
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               19.29 GiB
      PE Size               4.00 MiB
      Total PE              4938
      Alloc PE / Size       1977 / 7.72 GiB
      Free  PE / Size       2961 / 11.57 GiB
      VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

      lvdisplay

      --- Logical volume ---
      LV Name                /dev/ct/root
      VG Name                ct
      LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                5.73 GiB
      Current LE             1466
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:0

      --- Logical volume ---
      LV Name                /dev/ct/swap_1
      VG Name                ct
      LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
      LV Write Access        read/write
      LV Status              available
      # open                 2
      LV Size                2.00 GiB
      Current LE             511
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           252:1

    Read the article

  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. For monitoring CPU the top command seems to be a nice option, and for reads and writes iotop seems to be a handy one. For example, to monitor read/write activity every second I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I want to store only three variables: read/sec, write/sec and CPU usage/sec. Could you help me with a script that combines these three values from the top and iotop output and stores them in a log file? Thanks!
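    A minimal sketch of one way to sample those three values once per second without scraping top/iotop, by reading /proc directly (assumptions: Linux, the PID is known, permission to read /proc/<pid>/io, USER_HZ of 100, and CPU reported as seconds of one core used per wall-clock second):

      import sys
      import time

      CLK_TCK = 100  # assumption: the usual USER_HZ of 100 on Linux

      def read_io(pid):
          # Cumulative (read_bytes, write_bytes) from /proc/<pid>/io.
          stats = {}
          with open("/proc/%d/io" % pid) as f:
              for line in f:
                  key, value = line.split(":")
                  stats[key] = int(value)
          return stats["read_bytes"], stats["write_bytes"]

      def read_cpu(pid):
          # Cumulative CPU seconds (utime + stime) from /proc/<pid>/stat.
          # Naive split; fine as long as the process name contains no spaces.
          with open("/proc/%d/stat" % pid) as f:
              fields = f.read().split()
          return (int(fields[13]) + int(fields[14])) / CLK_TCK

      def monitor(pid, logfile="myprocess.log"):
          last_r, last_w = read_io(pid)
          last_cpu = read_cpu(pid)
          with open(logfile, "a") as log:
              while True:
                  time.sleep(1)
                  r, w = read_io(pid)
                  cpu = read_cpu(pid)
                  log.write("%s read/s=%d write/s=%d cpu/s=%.2f\n" %
                            (time.strftime("%H:%M:%S"), r - last_r, w - last_w, cpu - last_cpu))
                  log.flush()
                  last_r, last_w, last_cpu = r, w, cpu

      if __name__ == "__main__":
          monitor(int(sys.argv[1]))

    Usage would be something like: python3 monitor.py $(pidof myprocess). The deltas between samples give per-second rates, which is essentially what iotop -tbod1 reports.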

    Read the article

  • How to make mounted external drive writable over SFTP

    - by Brandon
    I have a user that I log in with over SFTP to my Ubuntu box. They have permission to write to the home directory of course. The external hard drive gets mounted to /media folder but I can't write to it over SFTP while logged in with that user. How would I set the permissions to allow me to write to the drive?

    Read the article

  • How do you prepare for death?

    - by klew
    I write programs, run a few websites (I have admin accounts and passwords), write some web services, and I have some encrypted data on my computer - and I sometimes ask myself: what will happen to all those projects and data if I die unexpectedly? Have you prepared yourself for death? Did you make a will (or some kind of e-will)? How do you protect the innocent people for whom you did some work? Did you write a letter with passwords and put it in an envelope in your desk?

    Read the article

  • Getting EOFException while trying to read from SSLSocket

    - by Isac
    Hi, I am developing a SSL client that will do a simple request to a SSL server and wait for the response. The SSL handshake and the writing goes OK but I can't READ data from the socket. I turned on the debug of java.net.ssl and got the following: [..] main, READ: TLSv1 Change Cipher Spec, length = 1 [Raw read]: length = 5 0000: 16 03 01 00 20 .... [Raw read]: length = 32 [..] main, READ: TLSv1 Handshake, length = 32 Padded plaintext after DECRYPTION: len = 32 [..] * Finished verify_data: { 29, 1, 139, 226, 25, 1, 96, 254, 176, 51, 206, 35 } %% Didn't cache non-resumable client session: [Session-1, SSL_RSA_WITH_RC4_128_MD5] [read] MD5 and SHA1 hashes: len = 16 0000: 14 00 00 0C 1D 01 8B E2 19 01 60 FE B0 33 CE 23 ..........`..3.# Padded plaintext before ENCRYPTION: len = 70 [..] a.j.y. main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] Padded plaintext before ENCRYPTION: len = 70 [..] main, WRITE: TLSv1 Application Data, length = 70 [Raw write]: length = 75 [..] main, received EOFException: ignored main, called closeInternal(false) main, SEND TLSv1 ALERT: warning, description = close_notify Padded plaintext before ENCRYPTION: len = 18 [..] main, WRITE: TLSv1 Alert, length = 18 [Raw write]: length = 23 [..] main, called close() main, called closeInternal(true) main, called close() main, called closeInternal(true) The [..] are the certificate chain. Here is a code snippet: try { System.setProperty("javax.net.debug","all"); /* * Set up a key manager for client authentication * if asked by the server. Use the implementation's * default TrustStore and secureRandom routines. */ SSLSocketFactory factory = null; try { SSLContext ctx; KeyManagerFactory kmf; KeyStore ks; char[] passphrase = "importkey".toCharArray(); ctx = SSLContext.getInstance("TLS"); kmf = KeyManagerFactory.getInstance("SunX509"); ks = KeyStore.getInstance("JKS"); ks.load(new FileInputStream("keystore.jks"), passphrase); kmf.init(ks, passphrase); ctx.init(kmf.getKeyManagers(), null, null); factory = ctx.getSocketFactory(); } catch (Exception e) { throw new IOException(e.getMessage()); } SSLSocket socket = (SSLSocket)factory.createSocket("server ip", 9999); /* * send http request * * See SSLSocketClient.java for more information about why * there is a forced handshake here when using PrintWriters. */ SSLSession session = socket.getSession(); [build query] byte[] buff = query.toWire(); out.write(buff); out.flush(); InputStream input = socket.getInputStream(); int readBytes = -1; int randomLength = 1024; byte[] buffer = new byte[randomLength]; while((readBytes = input.read(buffer, 0, randomLength)) != -1) { LOG.debug("Read: " + new String(buffer)); } input.close(); socket.close(); } catch (Exception e) { e.printStackTrace(); } I can write multiple times and I don't get any error but the EOFException happens on the first read. Am I doing something wrong with the socket or with the SSL authentication? Thank you.

    Read the article

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter their credentials. My script currently looks as follows: class GetInput { public static void ScriptMain(Project project) { Console.Clear(); Console.WriteLine("==================================================================="); Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility"); Console.WriteLine("==================================================================="); Console.WriteLine(); Console.WriteLine("Please enter the acronym of the project to work on from the following list:"); Console.WriteLine(); Console.WriteLine("--------"); Console.WriteLine("- BCS"); Console.WriteLine("- COPEN"); Console.WriteLine("- FCDD"); Console.WriteLine("--------"); Console.WriteLine(); Console.Write("Selection: "); project.Properties["project.type"] = Console.ReadLine(); Console.WriteLine(); Console.Write("Please enter username: "); string username = Console.ReadLine(); project.Properties["username"] = username; string password = ReturnPassword(); project.Properties["password"] = password Console.WriteLine(); Console.Write("Please enter database: "); string database = Console.ReadLine(); project.Properties["database"] = database Console.WriteLine(); //Call method to verify user credentials Console.WriteLine(); Console.WriteLine("Compiling files..."; } public static string ReturnPassword() { Console.Write("Please enter password: "); string password = ""; ConsoleKeyInfo nextKey = Console.ReadKey(true); while (nextKey.Key != ConsoleKey.Enter) { if (nextKey.Key == ConsoleKey.Backspace) { if (password.Length > 0) { password = password.Substring(0, password.Length - 1); Console.Write(nextKey.KeyChar); Console.Write(" "); Console.Write(nextKey.KeyChar); } } else { password += nextKey.KeyChar; Console.Write("*"); } nextKey = Console.ReadKey(true); } return password; } } Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace clicky. However, as mentioned in the link, Microsoft is discontinuing support for this so it is not a desireable solution. I have also fonud that Oracle provides its own classes for connecting to Oracle databases clicky. However, this only seems to support connecting to Oracle 9 or newer databases (clicky) so it is not feasible solution as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as is contained in the following .bat script: rem -- Set Database SID -- set ORACLE_SID=%DBSID% sqlplus -s %nameofuser%/%password%@%dbsid% set cmdsep on set cmdsep '"'; --" set term on set echo off set heading off select '========================================' || CHR(10) || 'Have checked and found both Password and ' || chr(10) || 'Database Identifier are valid, continuing ...' 
|| CHR(10) || '========================================' from dual; exit; This requires me to set the environment variable of ORACLE_SID and then run sqlplus in silent mode (-s) followed by a series of sql set commands (set x), the actual select statement and an exit command. Can I achieve this within a c# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!

    Read the article

  • Public class DiscoLight help

    - by luvthug
    Hi All, If some one can point me in the right direction for this code for my assigment I would really appreciate it. I have pasted the whole code that I need to complete but I need help with the following method public void changeColour(Circle aCircle) which is meant to allow to change the colour of the circle randomly, if 0 comes the light of the circle sgould change to red, 1 for green and 2 for purple. public class DiscoLight { /* instance variables */ private Circle light; // simulates a circular disco light in the Shapes window private Random randomNumberGenerator; /** * Default constructor for objects of class DiscoLight */ public DiscoLight() { super(); this.randomNumberGenerator = new Random(); } /** * Returns a randomly generated int between 0 (inclusive) * and number (exclusive). For example if number is 6, * the method will return one of 0, 1, 2, 3, 4, or 5. */ public int getRandomInt(int number) { return this.randomNumberGenerator.nextInt(number); } /** * student to write code and comment here for setLight(Circle) for Q4(i) */ public void setLight(Circle aCircle) { this.light = aCircle; } /** * student to write code and comment here for getLight() for Q4(i) */ public Circle getLight() { return this.light; } /** * Sets the argument to have a diameter of 50, an xPos * of 122, a yPos of 162 and the colour GREEN. * The method then sets the receiver's instance variable * light, to the argument aCircle. */ public void addLight(Circle aCircle) { //Student to write code here, Q4(ii) this.light = aCircle; this.light.setDiameter(50); this.light.setXPos(122); this.light.setYPos(162); this.light.setColour(OUColour.GREEN); } /** * Randomly sets the colour of the instance variable * light to red, green, or purple. */ public void changeColour(Circle aCircle) { //student to write code here, Q4(iii) if (getRandomInt() == 0) { this.light.setColour(OUColour.RED); } if (this.getRandomInt().equals(1)) { this.light.setColour(OUColour.GREEN); } else if (this.getRandomInt().equals(2)) { this.light.setColour(OUColour.PURPLE); } } /** * Grows the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is incremented in steps of 2, * the xPos and yPos are decremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void grow(int size) { //student to write code here, Q4(iv) } /** * Shrinks the diameter of the circle referenced by the * receiver's instance variable light, to the argument size. * The diameter is decremented in steps of 2, * the xPos and yPos are incremented in steps of 1 until the * diameter reaches the value given by size. * Between each step there is a random colour change. The message * delay(anInt) is used to slow down the graphical interface, as required. */ public void shrink(int size) { //student to write code here, Q4(v) } /** * Expands the diameter of the light by the amount given by * sizeIncrease (changing colour as it grows). * * The method then contracts the light until it reaches its * original size (changing colour as it shrinks). */ public void lightCycle(int sizeIncrease) { //student to write code here, Q4(vi) } /** * Prompts the user for number of growing and shrinking * cycles. Then prompts the user for the number of units * by which to increase the diameter of light. * Method then performs the requested growing and * shrinking cycles. 
*/ public void runLight() { //student to write code here, Q4(vii) } /** * Causes execution to pause by time number of milliseconds */ private void delay(int time) { try { Thread.sleep(time); } catch (Exception e) { System.out.println(e); } } }

    Read the article

  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn’t designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested me to abstract my calls to System.IO away so that I could tell FluentPath – not System.IO – to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I’m missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests as they don’t only test bits of my code. They really test the successful integration of my code with the underlying System.IO. In order to write such tests, the techniques of BDD work particularly well as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are being better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while and I thought it was pretty cool so that’s what I decided to use. Consider for example the following scenario: Scenario: Change extension Given a clean test directory When I change the extension of bar\notes.txt to foo Then bar\notes.txt should not exist And bar\notes.foo should exist This is human readable and tells you everything you need to know about what you’re testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical. 
Here is the code generated by SpecFlow: [NUnit.Framework.TestAttribute()] [NUnit.Framework.DescriptionAttribute("Change extension")] public virtual void ChangeExtension() { TechTalk.SpecFlow.ScenarioInfo scenarioInfo = new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null))); #line 6 this.ScenarioSetup(scenarioInfo); #line 7 testRunner.Given("a clean test directory"); #line 8 testRunner.When("I change the extension of " + "bar\\notes.txt to foo"); #line 9 testRunner.Then("bar\\notes.txt should not exist"); #line 10 testRunner.And("bar\\notes.foo should exist"); #line hidden testRunner.CollectScenarioErrors();} The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario: The way you usually write tests with SpecFlow is that you write the scenario first, let it fail, then write the translation of your Given, When and Then into code if they don’t already exist, which results in running but failing tests, and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with: [Given("a clean test directory")] public void GivenACleanDirectory() { _path = new Path(SystemIO.Path.GetTempPath()) .CreateSubDirectory("FluentPathSpecs") .MakeCurrent(); _path.GetFileSystemEntries() .Delete(true); _path.CreateFile("foo.txt", "This is a text file named foo."); var bar = _path.CreateSubDirectory("bar"); bar.CreateFile("baz.txt", "bar baz") .SetLastWriteTime(DateTime.Now.AddSeconds(-2)); bar.CreateFile("notes.txt", "This is a text file containing notes."); var barbar = bar.CreateSubDirectory("bar"); barbar.CreateFile("deep.txt", "Deep thoughts"); var sub = _path.CreateSubDirectory("sub"); sub.CreateSubDirectory("subsub"); sub.CreateFile("baz.txt", "sub baz") .SetLastWriteTime(DateTime.Now); sub.CreateFile("binary.bin", new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF}); } Then, to implement the scenario that you can read above, I had to write the following When: [When("I change the extension of (.*) to (.*)")] public void WhenIChangeTheExtension( string path, string newExtension) { var oldPath = Path.Current.Combine(path.Split('\\')); oldPath.Move(p => p.ChangeExtension(newExtension)); } As you can see, the When attribute is specifying the regular expression that will enable the SpecFlow engine to recognize what When method to call and also how to map its parameters. For our scenario, “bar\notes.txt” will get mapped to the path parameter, and “foo” to the newExtension parameter. And of course, the code that verifies the assumptions of the scenario: [Then("(.*) should exist")] public void ThenEntryShouldExist(string path) { Assert.IsTrue(_path.Combine(path.Split('\\')).Exists); } [Then("(.*) should not exist")] public void ThenEntryShouldNotExist(string path) { Assert.IsFalse(_path.Combine(path.Split('\\')).Exists); } These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementation of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of those steps in other scenarios. Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this. 
Rob wrote a long time ago about this sort of thing (but using a different framework) and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the “Download it here” link)

    Read the article

  • Using Apache FOP from .NET level

    - by Lukasz Kurylo
    In one of my previous posts I was talking about FO.NET which I was using to generate a pdf documents from XSL-FO. FO.NET is one of the .NET ports of Apache FOP. Unfortunatelly it is no longer maintained. I known it when I decidec to use it, because there is a lack of available (free) choices for .NET to render a pdf form XSL-FO. I hoped in this implementation I will find all I need to create a pdf file with my really simple requirements. FO.NET is a port from some old version of Apache FOP and I found really quickly that there is a lack of some features that I needed, like dotted borders, double borders or support for margins. So I started to looking for some alternatives. I didn’t try the NFOP, another port of Apache FOP, because I found something I think much more better, the IKVM.NET project.   IKVM.NET it is not a pdf renderer. So what it is? From the project site:   IKVM.NET is an implementation of Java for Mono and the Microsoft .NET Framework. It includes the following components: a Java Virtual Machine implemented in .NET a .NET implementation of the Java class libraries tools that enable Java and .NET interoperability   In the simplest form IKVM.NET allows to use a Java code library in the C# code and vice versa.   I tried to use an Apache FOP, the best I think open source pdf –> XSL-FO renderer written in Java from my project written in C# using an IKVM.NET and it work like a charm. In the rest of the post I want to show, how to prepare a .NET *.dll class library from Apache FOP *.jar’s with IKVM.NET and generate a simple Hello world pdf document.   To start playing with IKVM.NET and Apache FOP we need to download their packages: IKVM.NET Apache FOP and then unpack them.   From the FOP directory copy all the *.jar’s files from lib and build catalogs to some location, e.g. d:\fop. Second step is to build the *.dll library from these files. On the console execute the following comand:   ikvmc –target:library –out:d:\fop\fop.dll –recurse:d:\fop   The ikvmc is located in the bin subdirectory where you unpacked the IKVM.NET. You must execute this command from this catalog, add this path to the global variable PATH or specify the full path to the bin subdirectory.   In no error occurred during this process, the fop.dll library should be created. Right now we can create a simple project to test if we can create a pdf file.   So let’s create a simple console project application and add reference to the fop.dll and the IKVM dll’s: IKVM.OpenJDK.Core and IKVM.OpenJDK.XML.API.   
Full code to generate a pdf file from XSL-FO template:   static void Main(string[] args)         {             //initialize the Apache FOP             FopFactory fopFactory = FopFactory.newInstance();               //in this stream we will get the generated pdf file             OutputStream o = new DotNetOutputMemoryStream();             try             {                 Fop fop = fopFactory.newFop("application/pdf", o);                 TransformerFactory factory = TransformerFactory.newInstance();                 Transformer transformer = factory.newTransformer();                   //read the template from disc                 Source src = new StreamSource(new File("HelloWorld.fo"));                 Result res = new SAXResult(fop.getDefaultHandler());                 transformer.transform(src, res);             }             finally             {                 o.close();             }             using (System.IO.FileStream fs = System.IO.File.Create("HelloWorld.pdf"))             {                 //write from the .NET MemoryStream stream to disc the generated pdf file                 var data = ((DotNetOutputMemoryStream)o).Stream.GetBuffer();                 fs.Write(data, 0, data.Length);             }             Process.Start("HelloWorld.pdf");             System.Console.ReadLine();         }   Apache FOP be default using a Java’s Xalan to work with XML files. I didn’t find a way to replace this piece of code with equivalent from .NET standard library. If any error or warning will occure during generating the pdf file, on the console will ge shown, that’s why I inserted the last line in the sample above. The DotNetOutputMemoryStream this is my wrapper for the Java OutputStream. I have created it to have the possibility to exchange data between the .NET <-> Java objects. It’s implementation:   class DotNetOutputMemoryStream : OutputStream     {         private System.IO.MemoryStream ms = new System.IO.MemoryStream();         public System.IO.MemoryStream Stream         {             get             {                 return ms;             }         }         public override void write(int i)         {             ms.WriteByte((byte)i);         }         public override void write(byte[] b, int off, int len)         {             ms.Write(b, off, len);         }         public override void write(byte[] b)         {             ms.Write(b, 0, b.Length);         }         public override void close()         {             ms.Close();         }         public override void flush()         {             ms.Flush();         }     } The last thing we need, this is the HelloWorld.fo template.   <?xml version="1.0" encoding="utf-8"?> <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <fo:layout-master-set>     <fo:simple-page-master master-name="simple"                   page-height="29.7cm"                   page-width="21cm"                   margin-top="1.8cm"                   margin-bottom="0.8cm"                   margin-left="1.6cm"                   margin-right="1.2cm">       <fo:region-body margin-top="3cm"/>       <fo:region-before extent="3cm"/>       <fo:region-after extent="1.5cm"/>     </fo:simple-page-master>   </fo:layout-master-set>   <fo:page-sequence master-reference="simple">     <fo:flow flow-name="xsl-region-body">       <fo:block font-size="18pt" color="black" text-align="center">         Hello, World!       
</fo:block>     </fo:flow>   </fo:page-sequence> </fo:root>   I’m not going to explain how how this template is created, because this will be covered in the near future posts.   Generated pdf file should look that:

    Read the article
