Search Results

Search found 3177 results on 128 pages for 'smart phones'.


  • Troubleshooting major performance issue: Is culprit Intel RST, Hard drive, or something else?

    - by Sean Killeen
    The Setup I have the following components that come into play in this situation: an ASUS P8Z68 V/PRO motherboard; a RAID1 configuration (1x 1TB drive, 1x 2TB drive -- I explain below), accelerated with an SSD using Intel's RST software, and a 1 TB drive standing by as a spare; a Core i7 2600k; 32 GB RAM; Windows 8.1. This box was designed to be a beast, and until just recently, was very good at being just that. What's Happening The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory. For example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking save. The disks appear to be at 100% all the time (e.g. the disk activity light on the PC is solidly on -- not even any flashing). In ProcExp it appears that the disk is barely being utilized at all: Intel RST reports that everything is fine: Other Details Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2TB drive because I didn't realize that the 1TB drives were under warranty. I figured I'd replace the other 1 TB drive with another 2 TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those. Question(s) How can I troubleshoot this? What is the best way to try to figure out why the disks are being maxed out despite the OS reporting barely any disk usage and that everything is OK? Given the failures, etc. that I describe above, is it possible that the problem could be the I/O on the motherboard itself? If so, how would I even be able to diagnose it? I'm betting the drives that Seagate gave me are refurbished (didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this? In terms of how RAID1 works, is it possible that one drive is "falling behind" somehow, and that the RAID1 is constantly trying to fix the mirroring? If so, it seems like Intel RST would report on it, but I wanted to consider it as an option.
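
    A hedged starting point for the first question (not from the original post): Performance Monitor counters report per-disk latency and queue depth independently of what Task Manager, Process Explorer or Intel RST show. The counter names below assume an English-language counter set.

      # Sample disk latency and queue depth every 5 seconds for about a minute
      $counters = "\PhysicalDisk(*)\Avg. Disk sec/Read",
                  "\PhysicalDisk(*)\Avg. Disk sec/Write",
                  "\PhysicalDisk(*)\Current Disk Queue Length"
      Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12

    If Avg. Disk sec/Read or sec/Write sits well above roughly 0.05 (50 ms) while the queue length stays high, the bottleneck is below the filesystem, which would point at the drives, the RST caching layer or the controller rather than at any one process.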

    Read the article

  • Android Market: Application not visible on some Devices

    - by Andreas
    Hello, I have written an application that needs to process outgoing calls. Everything works fine and the application already has a few hundred downloads, but now I get feedback from people who would like to download it, yet cannot find it. I have done some tests and have found that the permission "PROCESS_OUTGOING_CALLS" seems to be responsible for this. If I include it in an app, people with branded phones (at least in Germany) cannot find it; as soon as I remove this permission, everything is fine (when I re-insert it, the app vanishes again). The weird thing is that those users can see other apps which use this permission in the Market. I have compared my manifest file to outputs from other manifest files and cannot understand why it doesn't work. Here is the manifest file for a test application I wrote to test the problem: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.eventkontor.marketavailabilitytest" android:versionName="1.2" android:versionCode="3" android:installLocation="auto"> <application android:icon="@drawable/icon" android:label="@string/app_name"> <activity android:name=".showMain" android:label="@string/app_name"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" /> <uses-permission android:name="android.permission.INTERNET"></uses-permission> <uses-permission android:name="android.permission.PROCESS_OUTGOING_CALLS"></uses-permission> <uses-permission android:name="android.permission.READ_CONTACTS"></uses-permission> <uses-permission android:name="android.permission.VIBRATE"></uses-permission> <uses-permission android:name="android.permission.READ_PHONE_STATE"></uses-permission> <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"></uses-permission> <supports-screens android:normalScreens="true" android:resizeable="true" android:largeScreens="true" android:smallScreens="false"></supports-screens> </manifest> Does anyone have an idea what I'm doing wrong?
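
    One avenue worth checking, hedged because nothing in the original post confirms it: the Market infers an implied android.hardware.telephony requirement from call-related permissions such as PROCESS_OUTGOING_CALLS, and devices whose firmware does not advertise that feature get filtered out. Declaring the feature as optional next to the uses-permission entries keeps the permission but drops the hard requirement (the android:required attribute is only understood from API level 5 on, so verify against your build target):

      <!-- Assumption: the filtering is feature-based, not permission-based -->
      <uses-feature android:name="android.hardware.telephony"
                    android:required="false" />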

    Read the article

  • changing image on listview at runtime in android

    - by Raj
    Hi, I am using a LinearLayout to display some text and an image. I have the images at drawable/ and I am implementing this with a ListActivity with some onListItemClick functionality. Now I want to change the image for the rows which have been processed by the onClick functionality, to show their status as processed. Can someone help me with changing the image at runtime? The following is my implementation. public class ListWithImage extends ListActivity { /** Called when the activity is first created. */ private SimpleCursorAdapter myAdapter; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // raj setContentView(R.layout.main); Cursor cursor = getContentResolver().query(People.CONTENT_URI, null, null, null, null); startManagingCursor(cursor); String[] columns = new String[] {People.NAME, People.NUMBER}; int[] names = new int[] {R.id.contact_name, R.id.contact_number}; myAdapter = new SimpleCursorAdapter(this, R.layout.main, cursor, columns, names); setListAdapter(myAdapter); } @Override protected void onListItemClick(ListView listView, View view, int position, long id) { super.onListItemClick(listView, view, position, id); Intent intent = new Intent(Intent.ACTION_CALL); Cursor cursor = (Cursor) myAdapter.getItem(position); long phoneId = cursor.getLong(cursor.getColumnIndex(People.PRIMARY_PHONE_ID)); intent.setData(ContentUris.withAppendedId(Phones.CONTENT_URI, phoneId)); startActivity(intent); } } and main.xml is: <LinearLayout android:layout_height="wrap_content" android:orientation="vertical" android:layout_width="250px"> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Name: " /> <TextView android:id="@+id/contact_name" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Phone: " /> <TextView android:id="@+id/contact_number" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout>
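
    A hedged sketch of the immediate visual change: grab the clicked row's ImageView inside onListItemClick and swap its drawable. R.id.status_icon and R.drawable.icon_done are placeholders, not from the original layout -- the row layout would need an ImageView with that id next to the two TextViews.

      @Override
      protected void onListItemClick(ListView listView, View view, int position, long id) {
          super.onListItemClick(listView, view, position, id);
          // view is the row that was tapped; mark it as processed
          ImageView status = (ImageView) view.findViewById(R.id.status_icon);
          if (status != null) {
              status.setImageResource(R.drawable.icon_done);
          }
          // ... existing ACTION_CALL intent code ...
      }

    A change made this way only lasts until the row is recycled or the cursor requeries; for a persistent "processed" marker you would keep a set of processed ids and set the image from the adapter instead (e.g. a SimpleCursorAdapter.ViewBinder or a custom bindView()).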

    Read the article

  • Suggestions for designing large-scale Java webapp from the ground up

    - by Chris Thompson
    Hi all, I'm about to start developing a large-scale system and I'm struggling with which direction to proceed. I've done plenty of Java web apps before and I have plenty of experience with servlet containers and GWT and some experience with Spring. The problem is most of my webapps have been thrown together just to be a proof of concept and what I'm struggling with is what set of frameworks to use. I need to have both a browser based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone) although I don't mind having some small checks in there to properly format the data. In addition, although I'm the only developer now, that won't necessarily be the case down the road and I'd like to design something that scales well both with regards to traffic and number of developers (isn't just a nightmare to maintain). So where I am now is planning on using GWT to design the browser-based interface but I'm struggling with how to reuse that code with to present the interface (most likely xml) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating xml for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...) There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language, I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial? Thanks in advance and sorry for the relatively broad question... Chris
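
    A hedged sketch of one way to avoid rewriting the servlets per client, using illustrative names that are not from the post: keep the business logic in a plain service class and let each client-facing endpoint (the GWT RPC servlet, an XML servlet for the phones) stay a thin adapter over it.

      // Shared business logic -- this is where the Hibernate/Spring-managed code would live.
      public class AccountService {
          public String summaryFor(String accountId) {
              return "balance=42"; // placeholder value
          }
      }

      // XML endpoint for the mobile clients; the GWT RPC servlet would delegate
      // to the same AccountService instead of duplicating the logic.
      public class AccountXmlServlet extends javax.servlet.http.HttpServlet {
          private final AccountService service = new AccountService();

          @Override
          protected void doGet(javax.servlet.http.HttpServletRequest req,
                               javax.servlet.http.HttpServletResponse resp)
                  throws java.io.IOException {
              resp.setContentType("text/xml");
              resp.getWriter().write("<summary>" + service.summaryFor(req.getParameter("id")) + "</summary>");
          }
      }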

    Read the article

  • Small Business Server services will not start, and remote desktop and UAC are broken

    - by Stephen Jennings
    Yesterday I began setting up a server with Windows Small Business Server 2008. All I am configuring it for right now is to be a domain controller and Exchange server. I completed the initial setup of SBS then started looking through different connection options (allowing VPN versus using a TS Gateway). After I rebooted one time, I started having three not-obviously-related issues: First, I could no longer remote desktop into the computer. I ran TCPView and saw that it was no longer listening on port 3389. I checked everything in Terminal Service Configuration but everything shows the computer ought to be allowing connections. Also, when I tried to use anything that required user account control elevation, the UAC dialog never popped up and the program that was waiting just froze. If I try to run "regedit" from the Run box, for example, it never appears. When I run in safe mode which does not run with UAC, I was able to access everything. I didn't want to deal with it, so I turned off UAC and rebooted. Finally, in the Windows SBS Console, there are status indicators for Security, Updates, Backup, and Other Alerts. The first three get stuck saying "Querying". Looking in the computer alerts, I have events showing the following services stopped: Background Intelligent Transfer Service KtmRm for Distributed Transaction Coordinator Distributed Transaction Coordinator Microsoft Exchange Information Store Microsoft Exchange System Attendant Microsoft Exchange Transport Windows Remote Management Update Services Windows Update I figured I must have configured something wrong accidentally and I couldn't find anything using Google explaining what might be the case, so I just decided to format the hard drive and reinstall SBS from scratch. I did this and everything was working last night, but I just turned the machine back on and it is doing the same thing again! On my second install, I did not configure anything except the following (all from SBS Console): Connect to the Internet (set IP and router address) Turn off customer feedback. Set up internet address. Decline to use a Smart Host for email. Added one standard user account. Since this happened again and I was very careful the second time not to configure anything outside of the SBS Console, I feel like there's something else going on. Right now the machine is on an isolated network that does have internet access. My desktop is the only other machine plugged into this network. Any and all help is appreciated (before I tear my hair out!)

    Read the article

  • Ubuntu box static routing problem

    - by Rafael
    Hello, I'm trying to configure a ubuntu server to be a router. This is my interface configuration (eth2 connects to my WAN, eth0 to my LAN): auto eth2 iface eth2 inet static address 192.168.0.249 netmask 255.255.255.0 gateway 192.168.0.1 broadcast 192.168.0.255 auto eth0 iface eth0 inet static address 192.168.100.1 netmask 255.255.255.0 This is the router information: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth2 And this is dhcp configuration: subnet 192.168.100.0 netmask 255.255.255.0 { range 192.168.100.101 192.168.100.254; option domain-name-servers 201.70.86.133; option routers 192.168.100.1; authoritative; } I'm then connecting a mac os x by cable on eth0. This is en0 interface configuration: en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether 00:26:bb:5d:82:b0 inet6 fe80::226:bbff:fe5d:82b0%en0 prefixlen 64 scopeid 0x4 inet 192.168.100.101 netmask 0xffffff00 broadcast 192.168.100.255 media: autoselect (100baseTX <full-duplex>) status: active And this is the routing table: Internet: Destination Gateway Flags Refs Use Netif Expire default 192.168.100.1 UGSc 139 32 en0 10.37.129/24 link#8 UC 2 0 vnic1 10.37.129.2 0:1c:42:0:0:9 UHLWI 0 839 lo0 10.37.129.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vnic1 10.211.55/24 link#7 UC 2 0 vnic0 10.211.55.2 0:1c:42:0:0:8 UHLWI 0 840 lo0 10.211.55.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vnic0 127 127.0.0.1 UCS 0 0 lo0 127.0.0.1 127.0.0.1 UH 3 507924 lo0 169.254 link#4 UCS 0 0 en0 172.16.42/24 link#10 UC 2 0 vmnet8 172.16.42.1 0:50:56:c0:0:8 UHLWI 0 839 lo0 172.16.42.255 link#10 UHLWbI 1 24 vmnet8 192.168.100 link#4 UC 2 0 en0 192.168.100.1 0:e0:7c:7e:f:99 UHLWI 139 0 en0 777 192.168.100.101 127.0.0.1 UHS 0 0 lo0 192.168.100.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 en0 192.168.116 link#9 UC 2 0 vmnet1 192.168.116.1 0:50:56:c0:0:1 UHLWI 0 839 lo0 192.168.116.255 ff:ff:ff:ff:ff:ff UHLWbI 0 4 vmnet1 When I ping 192.168.100.1, it works. When I ping 192.168.0.249, it also works. However, when I try to ping 192.168.0.1 it does not. Does anyone has any way to solve this? Is there a way to debug it? Thanks,
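
    Nothing in the configuration quoted above actually turns the box into a router, which would explain why the Mac can reach both of the router's own addresses but nothing beyond it: the upstream gateway 192.168.0.1 has no route back to 192.168.100.0/24. A hedged sketch of the two pieces usually needed, assuming eth2 faces the 192.168.0.0/24 side as described:

      # Let the kernel forward packets between eth0 and eth2
      sysctl -w net.ipv4.ip_forward=1          # persist with net.ipv4.ip_forward=1 in /etc/sysctl.conf

      # NAT the LAN behind the router's WAN-side address so replies can find their way back
      iptables -t nat -A POSTROUTING -o eth2 -s 192.168.100.0/24 -j MASQUERADE

    Alternatively, if NAT is not wanted, a static route for 192.168.100.0/24 via 192.168.0.249 would have to be added on 192.168.0.1 instead.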

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to ServerFault , please pardon if my details are too much Linux box acting as Virtual Host for domain hosting. Runs CentOs. Runs Parallels Plesk 9.x Regardless of the following, the SPAM keeps flowing in at 1-3 / second. An explanation of the problem... "xinetd service listens for SMTP connections and forwards to qmail-smtpd. The qmail service only process the queue, but does not control messages coming into the queue...that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES Problem is, qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So, it accepts bad mail with a forged replyto address which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged replyto address." We believe the fix is to patch the qmail-smtpd process to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is when we try to compile the chkuser patch we run into failures due to Plesk Control Panel." Is anyone aware of something we could do differently or better?" Other things that have NOT worked thus far: -Turning off any and all mail processes (to check as an indicator that an individual account has been compromised. This has been verified as NOT the case.) -Turning off mail AND http server processes (in the case of a compromised formmail) -Running EXIM in lieu of Qmail( easy/quick install but xinetd forces exim to close and restarts qmail on its own) -Turned on SPF protection via Plesk GUI. Does not help. -Turned on Greylisting via Plesk GUI. Does not help. -Disabled Bounce notifications via command line That which MIGHT work but have complications: -Use POSTFIX instead of QMAIL (No knowledge of POSTIFX and don't want to bother with it unless anyone knows it has potential to handle backscatter WELL before investing time) -As mentioned above, compiling a chkusr patch, we believe will STOP this problem, along with qmail (because of plesk in the mix, the comile fails every time and Parallels Plesk support is unresponsive unless I cough up MONEY) If I don't clear out the SPAM from the outgoing mail queue nightly, then it clogs up with millions of SPAMs and will bring down the OUTGOING email services. Any and all help welcome and appreciated!

    Read the article

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749 I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for stuff and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care, I just want it to work. Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. This isn't a problem if they could be bothered to change any location setting or something. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird but it's an apartment LAN/WLAN so this isn't exactly a typical use case. Relevant sections of config below: ip dhcp excluded-address 10.0.0.1 10.0.0.99 ! ip dhcp pool Internal-net import all network 10.0.0.0 255.255.255.0 default-router 10.0.0.1 domain-name 1770.local lease 7 ! ip dhcp pool static-pool import all origin file flash://staticmap default-router 10.0.0.1 domain-name 1770.local Contents of staticmap: *time* Aug 5 2010 09:00 AM *version* 2 !IP address Type Hardware address Lease expiration 10.0.0.100/24 1 001f.5b3e.d50a Infinite *end* You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC: mainframe:~ brad$ ifconfig en1 en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 00:1f:5b:3e:d5:0a What shows up in the DHCP binding table: basestar#show ip dhcp binding Bindings from all pools not associated with VRF: IP address Client-ID/ Lease expiration Type Hardware address/ User name 10.0.0.112 0100.1f5b.3ed5.0a Aug 12 2010 10:06 AM Automatic What's up with the funny looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want to to be able to port forward some ports to specific devices. The way I would do this with a consumer router is to do what I'm trying to do here; assign static DHCP to those devices then configure PAT for ports on those addresses.
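
    The leading 01 in the binding table is not corruption: a DHCP client-identifier is the media type (01 for Ethernet) prepended to the MAC, which is why 00:1f:5b:3e:d5:0a shows up as 0100.1f5b.3ed5.0a. As a hedged alternative to the file-based static-pool above, IOS also supports one small pool per reserved host; the pool name and address below are illustrative:

      ip dhcp pool LAPTOP-BRAD
         host 10.0.0.50 255.255.255.0
         client-identifier 0100.1f5b.3ed5.0a   ! 01 (Ethernet) + the en1 MAC
         default-router 10.0.0.1
         domain-name 1770.local

    Some clients are matched on "hardware-address 001f.5b3e.d50a" rather than client-identifier, so if a binding never appears it is worth trying the other form.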

    Read the article

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof): server { listen 3000; server_name .example.com; root /Users/website/public; passenger_enabled on; rails_env development; } server { listen 3443; root /Users/website/public; rails_env development; passenger_enabled on; ssl on; #ssl_verify_client on; ssl_certificate /Users/website/ssl/server.crt; ssl_certificate_key /Users/website/ssl/server.key; #ssl_client_certificate /Users/website/ssl/CA.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-SSL-Subject $ssl_client_s_dn; #proxy_set_header X-SSL-Issuer $ssl_client_i_dn; proxy_redirect off; proxy_max_temp_file_size 0; } and Rails code in the controller like this: request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" } other headers appear, but not those set in nginx, e.g.: Header rack.multithread Val false Header REQUEST_URI Val /login/new Header REMOTE_PORT Val 64021 Header rack.multiprocess Val true Header PASSENGER_USE_GLOBAL_QUEUE Val false Header PASSENGER_APP_TYPE Val rails Header SCGI Val 1 Header SERVER_PORT Val 3443 Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7 Header rack.request.query_hash Val Header DOCUMENT_ROOT Val /Users/website/public I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e., headers, input = parse_request(client) if headers if headers[REQUEST_METHOD] == PING process_ping(headers, input, client) else headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" } process_request(headers, input, client) end end only to find that the supposedly added headers do not exist there either: abstract_request_handler: HTTP_KEEP_ALIVE = 300 abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5 abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2 abstract_request_handler: CONTENT_LENGTH = 0 abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad" abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5 abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1 abstract_request_handler: HTTPS = on abstract_request_handler: REMOTE_ADDR = 127.0.0.1 abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61 abstract_request_handler: SERVER_ADDR = 127.0.0.1 abstract_request_handler: SCRIPT_NAME = abstract_request_handler: PASSENGER_ENVIRONMENT = development abstract_request_handler: REMOTE_PORT = 64021 abstract_request_handler: REQUEST_URI = /login/new abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7 abstract_request_handler: SERVER_PORT = 3443 abstract_request_handler: SCGI = 1 abstract_request_handler: PASSENGER_APP_TYPE = rails abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
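
    A hedged observation: with passenger_enabled on, requests are handed to Passenger directly rather than proxied, so the proxy_set_header lines never take effect -- which would explain why none of them reach the Rails controller. Passenger's nginx module has its own directive for injecting values into the request environment; whether it exists in 2.2.4 would need to be verified (it is documented for later 2.x/3.x releases):

      server {
          listen 3443;
          root /Users/website/public;
          rails_env development;
          passenger_enabled on;
          ssl on;
          ssl_certificate     /Users/website/ssl/server.crt;
          ssl_certificate_key /Users/website/ssl/server.key;

          # Injected straight into the request environment Passenger builds
          passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
          passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
      }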

    Read the article

  • External USB HD issues with a twist (works on Windows7 but not XP)

    - by Eruditass
    I have this older external USB HD, 160 GB. I was using it to copy my Steam games to another computer. On the source computer, Windows 7 64-bit, everything worked fine. Drive reported no errors, had no hiccups, etc. Plugging it into the Windows XP 32-bit computer, it worked fine for looking through the files, moving files around on it (no real reading/writing, just modifying the filesystem table). However, when copying files from it to my internal HD, after a couple seconds to tens of minutes (seemingly random times), the USB device becomes unrecognized and it reports a delayed write error. Events in system log go like this, chronologically: (number times displayed)xSource (Event ID): "message" 2xdisk (51): An error was detected on device \Device\Harddisk1\D during a paging operation. 1xftdisk (57): The system failed to flush data to the transaction log. Corruption may occur. 1xApplication popup (26): Windows - Delayed Write Failed : Windows was unable to save all the data for the file E:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. 1xntfs (50): {Delayed Write Failed} Windows was unable to save all the data for the file . The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere. These repeat for a while, then there is 10+ disk messages or ftdisk messages. Other notes: This occurs on random files at random times. This problem cannot be replicated on the Windows 7 source machine when copying from the HD to a different location on its local disk chkdsk /f was run and found no errors. chkdsk /f/r has the delayed write issue. drive was set to quick removal. Setting to performance in device manager yielded same result I am not writing anything to the USB external drive, so I am not sure why there is even a delayed write error (writing file access times?) local Windows XP was chkdsk'd without problems Windows XP machine has no problems with other USB HD's Various USB ports were attempted Rebooting did not help Occurs with SyncToy as well as windows explorer SMART status is good on both local drive and the external one Lack of gaming is making me cranky

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as a KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices. One for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure the basic structure itself is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests Xen software virtualisation (therefore also no Virtio) no LibVirt management Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore) We had no IPv6 connectivity Neither in the hypervisor nor in the guests did we have noticeable I/O latency problems According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.
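
    Not from the original post, but one hedged way to narrow down which layer introduces the latency is to run the same small random-read probe against each level of the stack and compare the completion-latency percentiles. fio is assumed to be installable; the device-mapper names after /dev/md1 are placeholders that have to be adjusted to the real LUKS and LVM names:

      # Read-only 4k random-read probe, 30 seconds per layer
      for dev in /dev/sda /dev/md1 /dev/mapper/md1_crypt /dev/mapper/vg0-hypervisor--root; do
          fio --name=latprobe --filename=$dev --readonly --direct=1 \
              --rw=randread --bs=4k --iodepth=1 --runtime=30 --time_based
      done

    If latency is already bad on /dev/sda or /dev/sdb it is a drive/controller problem; if it only appears at the LUKS or LVM layer, the encryption overhead or the alignment on the mixed 512e/4k-sector drives becomes the prime suspect.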

    Read the article

  • Best way to handle PHP sessions across Apache vhost wildcard domains

    - by joshholat
    I'm currently running a site that allows users to use custom domains (i.e. so instead of mysite.com/myaccount, they could have myaccount.com). They just change the A record of their domain and we then use a wildcard vhost on Apache to catch the requests from the custom domains. The setup is basically as seen below. The first vhost catches the mysite.com/myaccount requests and the second would be used for myaccount.com. As you can see, they have the exact same path and php cookie_domain. I've noticed some weird behavior surrounding the line below "#The line below me". When active, the custom domains get a new session_id every page load (that isn't the same as the non-custom domain session). However, when I comment that line out, the user keeps the same session_id on each page load, but that session_id is not the same as the one they'd see on a non-custom domain site either despite being completely on the same server. There is a sort of "hack" workaround involving redirecting the user to mysite.com/myaccount, getting the session ID, redirecting back to myaccount.com, and then using that ID on the myaccount.com. But that can get kind of messy (i.e. if the user logs out of mysite.com/myaccount, how does myaccount.com know?). For what it's worth, I'm using a database to manage the sessions (i.e. so there's no issues with being on different servers, etc, but that's irrelevant since we only use one server to handle all requests currently anyways). I'm fairly certain it is related to some sort of CSRF browser protection thing, but shouldn't it be smart enough to know it's on the same server? Note: These are subdomains, they're separate domains entirely (but on the same server). <VirtualHost *:80> DocumentRoot "/opt/local/www/mysite.com" ServerName mysite.local ErrorLog "/opt/local/apache2/logs/mysite.com-error.log" CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common <Directory "/opt/local/www/mysite.com"> AllowOverride All #php_value session.save_path "/opt/local/www/mysite.com/sessions" php_value session.cookie_domain "mysite.local" php_value auto_prepend_file "/opt/local/www/mysite.com/core.php" </Directory> </VirtualHost> #Wildcard (custom domain) vhost <VirtualHost *:80> DocumentRoot "/opt/local/www/mysite.com" ServerName default ServerAlias * ErrorLog "/opt/local/apache2/logs/mysite.com-error.log" CustomLog "/opt/local/apache2/logs/mysite.com-access.log" common <Directory "/opt/local/www/mysite.com"> AllowOverride All #php_value session.save_path "/opt/local/www/mysite.com/sessions" # The line below me php_value session.cookie_domain "mysite.local" php_value auto_prepend_file "/opt/local/www/mysite.com/core.php" </Directory> </VirtualHost>
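
    A hedged sketch of the cookie-domain half of this (it does not solve sharing one login across both domains, which still needs the redirect handoff or a signed token): instead of hard-coding session.cookie_domain to mysite.local in the wildcard vhost, derive it per request in core.php (the auto_prepend_file above), so a browser sitting on myaccount.com is actually willing to store and return the session cookie. That would mean dropping the php_value session.cookie_domain line from the wildcard vhost.

      <?php
      // Top of core.php -- hedged sketch, not the original code
      $host = preg_replace('/:\d+$/', '', $_SERVER['HTTP_HOST']); // strip any :port suffix
      session_set_cookie_params(0, '/', $host);
      session_start();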

    Read the article

  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my server 2008 r2 box and I am getting an error code '2155347997' when I view the backup event log details. The command line tells me that I have log files written to the c:\windows\logs\windowsserverbackup\ path but I have no files of the .log type there. My command window tells me "System Writer is not found in the backup". However when I run vssadmin list writers I find System Writer in the list and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged on administrator account. My backup target path has permission for network service to have full control and it has plenty of free space. Looking in eventlog I have two VSS error 8194 that happen immediately before the Backup error 517 which has the errorcode 2155347997 listed. All three of these errors are a result of trying to run the command for the systemstatebackup. It's my belief that some VSS related permission is failing and exiting the backup process before it ever gets started. Because of this the initial code that creates the log files must not be running and this is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory I do see that I have a Wbadmin.0.etl file which gets created but it is deleted when the backup errors out and stops. I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try and fix this issue before posting here: Machine runs a HP 1410i smart array controller but at one time also used a LSI scsi card. Used networkadminkb.com's kb# a467 to find one LSI_SCSI entry in HKLMSysCurrentControlSetServices which start was set to 0x0 and I modified to 0x3. No changes. In HKLMSystemCurrentControlSetServicesVSSDiag I gave network service full control where it previously only had "Special Permission". No changes. I followed KB2009272 to manually try to fix system writer. These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server and that was known in the past to use VSS for copying operations and it occasionally threw an error. However since an update last year those error event log entries have stopped.

    Read the article

  • Sockets receiving null (Android)

    - by Henrik
    I have a android app that is communicating with a server (written in java). Between these two parts I have established a Socket connection and want to send data. The problem I am having is that sometimes, for some users, the information that reaches the server is null. This works (for all phones, all users): Server: int a = Integer.parseInt(in.readLine()); int b = Integer.parseInt(in.readLine()); int c = Integer.parseInt(in.readLine()); int d = Integer.parseInt(in.readLine()); String checksum = in.readLine(); String model = in.readLine(); String device = in.readLine(); String name = in.readLine(); Client: out.println(a); out.println(b); out.println(c); out.println(d); out.println(hash); out.println(Build.MODEL); out.println(Build.DEVICE); String name = fixName(); out.print(name); out.flush(); This does not work (for some users): Server: int a = Integer.parseInt(in.readLine()); String checksum = in.readLine(); String model = in.readLine(); String device = in.readLine(); String name = in.readLine(); String msg = in.readLine(); int version = -1; String test = "hej"; try{ test = in.readLine(); version = Integer.parseInt(test); }catch(Exception e){ e.printStackTrace(); } Client: out.println(a); out.println(hash); out.println(Build.MODEL); out.println(Build.DEVICE); String name = fixName(); if(name == null) name = "John Doe"; out.println(name); String msg = fixMsg(); if(msg == null) name = "nada"; out.println(msg); out.println(curversion); out.flush(); Sometimes, in the second case, the name, msg, and version (the string test) are null at the server side. The catch is triggered because test is null. curversion,a are ints, the rest are strings. Any ideas?
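
    A hedged guess at a hardening step, not taken from the original code: if fixName() or fixMsg() can ever return a string containing a newline, the extra line break shifts every subsequent readLine() on the server, which would look exactly like "random" nulls for some users. Collapsing the values to one line keeps the line-based framing intact.

      // Client-side helper; "unknown" is an arbitrary fallback value
      private static String oneLine(String s) {
          return (s == null) ? "unknown" : s.replace("\r", " ").replace("\n", " ").trim();
      }

      // then:
      //   out.println(oneLine(fixName()));
      //   out.println(oneLine(fixMsg()));
      //   out.println(curversion);
      //   out.flush();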

    Read the article

  • Intel RST accidentally selected wrong drive as system drive -- how to fix?

    - by Sean Killeen
    Question / TL;DR If Intel RST has marked a drive other than my RAID set as the system drive, how can I get it so that the RAID set is now seen as the system drive, and catch it up with the drive I'm currently running from? What Happened NOTE: Some perhaps unwise decisions are ahead. This is the order of things as best as I can recall. I had a 2x1TB RAID1 config. I bought the drives around the same time, and they started to die around the same time. I replaced the 1st drive with a 2 TB drive before the other one's SMART errors got more serious. I waited for the RAID to replicate, then replaced the 2nd drive with a manufacturer's replacement. I got a second manufacturer's drive replacement and used it as a spare, so now I had a 1TB/2TB pair in a RAID1 and another 1TB as a spare. The 1TB drive in the replacement set was bad from the manufacturer. Rather than mess with their refurbished stuff, I bought another 2 TB drive and upped the config to a 2x2TB RAID1 with the other, functioning manufacturer's drive as a spare. I made the mistake of trying to bring the other drive online to clean it out, and the signature clash killed my machine. When the machine rebooted, that drive was marked as the system drive. So, I have a 2x2TB RAID1 that is apparently offline, and 1 spare 1 TB refurbished drive that everything is being run from. Not a great idea. Options I'm considering Bring the 2x2TB drive back online, and then unplug the spare until I can format it in another system. This would involve some data loss, but the more I think about it, I actually think I haven't modified any data that isn't backed up or synced somewhere (go me!) Anything that isn't is likely trivial enough that I'm willing to take the risk. One downside here is that if the 2 TB doesn't have data on it for some reason, I could be screwed trying to put the other drive back in, no? Try to somehow get the RAID1 updated with the data from the current system drive. Option 3?

    Read the article

  • Array values disappear in PHP SoapClient call to Cisco phone system.

    - by Jamin
    I am attempting to consume a SOAP service provided by our Cisco phone system (documentation), to get the current status of a given set of phones. I have an array of phone names, which I'm trying to pass to the service; however, the values of the array are being eaten somewhere. The array of items looks like so: $items = array( 0 => "SEP0004F2E57F8C", 1 => "SEP001111BF8758", 2 => "SEP001320BD485C" ); Attempting to call the method: $client = new SoapClient( "https://x.x.x.x/realtimeservice/services/RisPort?wsdl", array( "login" => "admin", "password" => "xxxxx", "trace" => true ) ); $devices = $client->SelectCmDevice( "", array( "SelectBy" => "Name", "Status" => "Any", "SelectedItems" => $items ) ); When I debug the complete request I get the following: <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.cisco.com/ast/soap/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Body> <ns1:SelectCmDevice> <StateInfo xsi:type="xsd:string"></StateInfo> <CmSelectionCriteria xsi:type="ns1:CmSelectionCriteria"> <MaxReturnedDevices xsi:nil="true"/> <Class xsi:nil="true"/> <Model xsi:nil="true"/> <Status xsi:type="xsd:string">Any</Status> <NodeName xsi:nil="true"/> <SelectBy xsi:type="xsd:string">Name</SelectBy> <SelectItems SOAP-ENC:arrayType="ns1:SelectItem[3]" xsi:type="ns1:SelectItems"> <item xsi:type="ns1:SelectItem"/> <item xsi:type="ns1:SelectItem"/> <item xsi:type="ns1:SelectItem"/> </SelectItems> </CmSelectionCriteria> </ns1:SelectCmDevice> </SOAP-ENV:Body> </SOAP-ENV:Envelope> The correct number of <item> elements was counted and inserted into the <SelectItems> object; however, the actual item names themselves are gone. I would guess it needs to be <item>SEP0004F2E57F8C</item>, etc., but I can't seem to figure out how to make it do that. Thank you in advance for any help!!!
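
    A hedged sketch of one likely fix: SelectedItems is declared as an array of SelectItem structures, so each phone name probably has to be wrapped in a struct whose member name matches the WSDL rather than passed as a bare string. The member name 'Item' below is an assumption -- check it against $client->__getTypes().

      <?php
      // Reusing $items and $client from above
      $selectItems = array();
      foreach ($items as $name) {
          $selectItems[] = array('Item' => $name);
      }

      $devices = $client->SelectCmDevice("", array(
          "SelectBy"      => "Name",
          "Status"        => "Any",
          "SelectedItems" => $selectItems
      ));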

    Read the article

  • apache sendmail: trying to change user "from" address from apache to domain account

    - by Wes
    I apologize if I am asking a question already answered, but my problem isn't really that I haven't found an answer. I have, in fact, found a half-dozen different "solutions" to my problem, tried them all, in various combinations, and have been consistently unsuccessful. The goal All I want to do is change the envelope "from" address for all email sent from [email protected] to [email protected], always. What I've already done I am running Apache, PHP, and sendmail on CentOS 5.5, [email protected]. We have an SMTP server at 192.168.0.4. The domain's email accounts are all at @domain.org. I have successfully set up "smart host" using this line in the sendmail.mc file: define(`SMART_HOST', `192.168.0.4')dnl Then I set up masquerading, and was hopeful this would solve it. I have this in the .mc file: FEATURE(`masquerade_entire_domain')dnl FEATURE(`masquerade_envelope')dnl FEATURE(`allmasquerade')dnl MASQUERADE_AS(`domain.org')dnl MASQUERADE_DOMAIN(`domain.org.')dnl MASQUERADE_DOMAIN(`localhost.localdomain.')dnl This rewrites "to" addresses, but not "from" addresses. Testing from the command line: sendmail -v [email protected] Always is shown from the local user (in this case root, or my local user account). I had read that "sendmail" command sometimes bypasses masquerading. Nevertheless, using the "mail" command has the same result. After that, I have explored several "solutions", including: mailertable virtusertable FEATURE(`accept_unresolvable_domains')dnl LOCAL_DOMAIN(`localhost.localdomain')dnl FEATURE(`genericstable')dnl /etc/mail/access file /etc/mail/local-host-names file /etc/mail/trusted-users file All to no affect. The last thing I've tried So, I decided to go in a different direction, and try to set the envelope "from" address via PHP, using either the configuration in /etc/php.ini, or adding the -f parameter to the mail() function or to sendmail command. If I run this command: sendmail -v -f [email protected] [email protected] I get this error in /var/log/maillog: Mar 30 08:56:16 localhost sendmail[24022]: p2UCuE8w024022: [email protected], size=5, class=0, nrcpts=1, msgid=<[email protected]>, relay=user@localhost Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: [email protected], [email protected] (500/502), delay=00:00:05, xdelay=00:00:03, mailer=relay, pri=30005, relay=[192.168.0.4] [192.168.0.4], dsn=5.1.1, stat=User unknown Mar 30 08:56:19 localhost sendmail[24022]: p2UCuE8w024022: p2UCuE8x024022: DSN: User unknown Mar 30 08:56:23 localhost sendmail[24022]: p2UCuE8x024022: [email protected], delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=31029, relay=[192.168.0.4] [192.168.0.4], dsn=2.0.0, stat=Sent (Ok: queued as B5E2E40E0A2) Which is basically a "User unknown" 550 error. Help Please help. What do I need to change? Should I just start over in the sendmail.mc file? It has a ton of config options stuffed in it, over days of trying things. Why is changing the envelope "from" address via the command line generating a "User unknown" error?
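
    For reference, a hedged sketch of the pieces that all have to line up for genericstable to rewrite the apache sender (the hostnames passed to GENERICS_DOMAIN are assumptions -- they must match whatever local domain sendmail actually puts on the envelope sender):

      dnl --- in sendmail.mc ---
      FEATURE(`genericstable', `hash -o /etc/mail/genericstable.db')dnl
      GENERICS_DOMAIN(`localhost.localdomain')dnl
      GENERICS_DOMAIN(`domain.org')dnl
      FEATURE(`masquerade_envelope')dnl

      # --- /etc/mail/genericstable ---
      #   apache    [email protected]

      # rebuild the map and sendmail.cf, then restart
      makemap hash /etc/mail/genericstable < /etc/mail/genericstable
      make -C /etc/mail && service sendmail restart

    The "User unknown" in the -f test appears to be a separate issue: the smart host at 192.168.0.4 rejected the recipient address, so that test says nothing about whether the sender rewriting worked.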

    Read the article

  • file corruption on read/write 2.6.32-22-server (happens across many kernels)

    - by Jonathan
    Hi Guys, I'm having an issue where after the server has been up for a period of time (~week/few days) the server will start reading corrupt data. For instance when I run a sha1sum of a file after a fresh boot it remains the same. However after a while I will start to get segfaults and from then on whenever I read this file I get a different sha1sum. I've checked S.M.A.R.T with long tests and I've run an extended memtest86+(12 passes) My lspci is as follows: 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:01.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (int gfx) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3) 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3c) 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller 00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 01:05.0 VGA compatible controller: ATI Technologies Inc Radeon HD 3300 Graphics 01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller 02:00.0 Ethernet controller: Atheros Communications Atheros AR8121/AR8113/AR8114 PCI-E Ethernet Controller (rev b0) 03:00.0 FireWire (IEEE 1394): VIA Technologies, Inc. Device 3403 I could really use some help on this, do you have any idea what could cause this? It's really frustrating me as it seems to trigger entirely randomly and will not go away until I reboot. I'm also use KVM for virtualization as well as MD for software RAID on this server and the processor is a Phenom II X4 965. I don't believe it's the software raid however as this affects files also hosted on non-raid partitions so I don't know.

    Read the article

  • Are random packets normal?

    - by TheLQ
    About a month ago on one of my servers I started receiving random packets from IPs all over the world. So I did the smart thing and stopped putting off installing an IDS. This IDS is a ClearOS Gateway which comes with Snort and SnortSam. I enabled it, checked There is a total of 4 ports open, two of which forward to the server I'm talking about. These ports are 3724 and 8085, so they aren't going to be easily detected in a port scan. However checking some logs of this server I found that the attack is resuming. I found this ... Accepting connection from '75.166.155.122' [Auth] got unknown packet from '75.166.155.122' Accepting connection from '98.164.154.93' [Auth] got unknown packet from '98.164.154.93' Ping MySQL to keep connection alive Accepting connection from '70.241.195.129' [Auth] got unknown packet from '70.241.195.129' Accepting connection from '67.182.229.169' [Auth] got unknown packet from '67.182.229.169' Accepting connection from '69.137.140.38' [Auth] got unknown packet from '69.137.140.38' Accepting connection from '76.31.72.55' [Auth] got unknown packet from '76.31.72.55' Accepting connection from '97.88.139.39' [Auth] got unknown packet from '97.88.139.39' Accepting connection from '173.35.62.112' [Auth] got unknown packet from '173.35.62.112' Accepting connection from '187.15.10.73' [Auth] got unknown packet from '187.15.10.73' Accepting connection from '66.66.94.124' [Auth] got unknown packet from '66.66.94.124' Accepting connection from '75.159.219.124' [Auth] got unknown packet from '75.159.219.124' Accepting connection from '99.102.100.82' [Auth] got unknown packet from '99.102.100.82' Accepting connection from '24.128.240.45' [Auth] got unknown packet from '24.128.240.45' Accepting connection from '99.231.7.39' [Auth] got unknown packet from '99.231.7.39' Accepting connection from '206.255.79.56' [Auth] got unknown packet from '206.255.79.56' Accepting connection from '68.97.106.235' [Auth] got unknown packet from '68.97.106.235' Accepting connection from '69.134.67.251' [Auth] got unknown packet from '69.134.67.251' Accepting connection from '63.228.138.186' [Auth] got unknown packet from '63.228.138.186' Accepting connection from '184.39.146.193' [Auth] got unknown packet from '184.39.146.193' Accepting connection from '69.171.161.102' [Auth] got unknown packet from '69.171.161.102' Accepting connection from '76.0.47.228' [Auth] got unknown packet from '76.0.47.228' Ping MySQL to keep connection alive Accepting connection from '126.112.201.14' [Auth] got unknown packet from '126.112.201.14' Ping MySQL to keep connection alive Now that scares me. Why isn't Snort detecting this? How were they able to find this specific port? More importantly, what normally would these packets contain? Is this something I should be worried about? How can I stop this?
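
    To answer "what would these packets contain" directly, a packet capture on the two forwarded ports shows the payload; the interface name is an assumption, and this assumes tcpdump is available on the gateway or the server itself:

      # Write the raw traffic to a file for later inspection in Wireshark
      tcpdump -i eth0 -nn -s 0 'tcp port 3724 or tcp port 8085' -w /tmp/unknown-packets.pcap

      # Or watch it live with ASCII payload decoding
      tcpdump -i eth0 -nn -A 'tcp port 3724 or tcp port 8085'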

    Read the article

  • filtering itunes library items by file location

    - by Cawas
    3 answers and unfortunately no solution yet. The Problem I've got way more than 1000 duplicated items in my iTunes Library pointing to a non-existant place (the "where" under "get info" window), along with other duplicated items and other MIAs (Missing In Action). Is there any simple way to just delete all of them and only them? From the library, of course. By that I mean some MIAs are pointing to /Volumes while some are pointing to .../music/Music/... or just .../music/.... I want to delete all pointing to /Volumes as to later I'll recover the rest. Check the image below. Some Background I tried searching for a specific key word on the path and creating smart play list, but with no result. Being able to just sort all library by path would be a perfect solution! I believe old iTunes could do that. PowerTunes can do it (sort by path) but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library. Since this seems to just not be clear enough... AppleScript doesn't work That's because AppleScript just can't gather the missing info anywhere in iTunes Library. Maybe we could use AppleScript by opening the XML file, but that's a whole nother issue. Here's a quote from my conversation with Doug the man himself Adams last december: I don't think you do understand. There is no way to get the path to the file of a dead track because iTunes has "forgotten" it. That is, by definition, what a dead track is. Doug On Dec 21, 2010, at 7:08 AM, Caue Rego wrote: yes I understand that and have seem the script. but I'm not looking for the file. just the old broken path reference to it. Sent from my iPhone On 21/12/2010, at 10:00, Doug Adams wrote: You cannot locate missing files of dead tracks because, by definition, a dead track is one that doesn't have any file information. If you look at "Super Remove Dead Tracks", you will notice it looks for tracks that have "missing value" for the location property.
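
    The "open the XML file" idea is workable outside AppleScript: the library XML still records the last-known Location for many dead tracks. A hedged sketch in Python 2 (the library path and the presence of a Location key are assumptions):

      import os, plistlib, urllib

      path = os.path.expanduser("~/Music/iTunes/iTunes Music Library.xml")
      lib = plistlib.readPlist(path)
      for track_id, track in lib["Tracks"].items():
          loc = urllib.unquote(track.get("Location", ""))
          if "/Volumes/" in loc:
              print track_id, "-", track.get("Artist", "?"), "-", track.get("Name", "?")

    That yields the track IDs and names of everything whose last-known file lived under /Volumes, which can then be selected and deleted from the library by hand or fed to a follow-up script.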

    Read the article

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. https traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus: ServerAdmin [email protected] ServerName ssh.glakspod.org <!-- Proxy Section --> <!-- Used in conjunction with ProxyTunnel --> <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port --> ProxyRequests on ProxyVia on AllowCONNECT 22 <Proxy *> Order deny,allow Deny from all Allow from 74.101 </Proxy> So far so good: I hit the Apache proxy with a CONNECT and then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup: The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it. I can't get the VirtualHost to be a regular https site while proxying. I'd like the proxy to coexist with something like this: SSLEngine on SSLProxyEngine on SSLCertificateFile /path/to/ca/samapache.crt SSLCertificateKeyFile /path/to/ca/samapache.key SSLCACertificateFile /path/to/ca/ca.crt DocumentRoot /mnt/wallabee/www/html <Directory /mnt/wallabee/www/html/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> <!-- Need a valid client cert to get into the sanctum --> <Directory /mnt/wallabee/www/html/sanctum> SSLVerifyClient require SSLOptions +FakeBasicAuth +ExportCertData SSLVerifyDepth 1 </Directory> So my question is: How to I enable SSL support on the ssh.glakspod.org:443 VirtualHost that will work with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.

    Read the article

  • How to enjoy DVD on Apple iPad

    - by user44251
    I believe many people spent a sleepless night yesterday waiting for the new Apple Tablet to come, just a few days ago or perhaps longer I noticed fierce debate about it, its name, size, capacity, processor, main features, price etc. And now, they can take a long breath with the new Apple Tablet named iPad officially released on 28, January, 2010 (Beijing Time). But I know a new battle just begins. iPad, sounds somewhat like iPod and it really shares some similarities in terms of shape like smart, light and portable. It has a 9.7-inch, LED-backlit, IPS display with a remarkable precise Multi-Touch screen. And yet, at just 1.5 lbs and 0.5 inches thin, it's easy to carry and use everywhere. It can greatly facilitates your experience with the web, emails, photos and videos. Right now, it can run almost 140.000 of the apps on the Apple store. It can even run the apps you have downloaded for your iPhone or iPod touch. But so far, I haven't seen any possibility that it can work with DVD, probability there is no built-in DVD-ROM or DVD player which can play DVD directly. As Apple iPad states, the video formats supported are MPEG-4 (MP4, M4V), H.264, MOV etc and audio formats accepted are AAC, Proteceted AAC, MP3, AIFF and WAV etc, those are formats that are commonly used with iMac. This could really a hard nut to crack if you want to watch your favourite DVD on this magic Apple iPad. But don't worry, there is still way out, you just need a few steps for ripping and importing DVD movies to Apple iPad with a simple application DVD to iPad converter What's on DVD to iPad Converter for Mac DVD to iPad converter for Mac is a powerful and professional application designed for the newly released Apple iPad which can rip, convert your DVD contents to Apple iPad compatible MPEG-4 (MP4, M4V), H.264, MOV etc, and other popular file formats like AVI, WMV, MPG, MKV, VOB, 3GP, FLV etc can also be converted so that you can put on your portable devices like iPod, iPhone, iRiver, BlackBerry etc. Besides, it can also extract audio from DVD videos and save as MP3, AIFF, AAC, WAV etc. Mac DVD to iPad converter has also been enhanced that can run both on PowerPC and Intel (Snow Leopard included). It can offer versatile editing features which allows you to make your own DVD videos. For example, you can cut your DVD to whatever length you like by Trim, crop off unwanted parts from DVD clips by Crop, add special effect like Gray, Emboss and Old film to make your videos more artistic. Besides, its built-in merging feature and batch mode allows you to join several DVD clips into a single one and do batch conversion. And more features can be expected if you afford a few minutes to try.

    Read the article

  • How to display specific data from a file

    - by user1067332
    My program is supposed to ask the user for firstname, lastname, and phone number till the users stops. Then when to display it asks for the first name and does a search in the text file to find all info with the same first name and display lastname and phones of the matches. import java.util.*; import java.io.*; import java.util.Scanner; public class WritePhoneList { public static void main(String[] args)throws IOException { BufferedWriter output = new BufferedWriter(new FileWriter(new File( "PhoneFile.txt"), true)); String name, lname, age; int pos,choice; try { do { Scanner input = new Scanner(System.in); System.out.print("Enter First name, last name, and phone number "); name = input.nextLine(); output.write(name); output.newLine(); System.out.print("Would you like to add another? yes(1)/no(2)"); choice = input.nextInt(); }while(choice == 1); output.close(); } catch(Exception e) { System.out.println("Message: " + e); } } } Here is the display code, when i search for a name, it finds a match but displays the last name and phone number of the same name 3 times, I want it to display all of the possible matches with the first name. import java.util.*; import java.io.*; import java.util.Scanner; public class DisplaySelectedNumbers { public static void main(String[] args)throws IOException { String name; String strLine; try { FileInputStream fstream = new FileInputStream("PhoneFile.txt"); // Get the object of DataInputStream DataInputStream in = new DataInputStream(fstream); BufferedReader br = new BufferedReader(new InputStreamReader(in)); Scanner input = new Scanner(System.in); System.out.print("Enter a first name"); name = input.nextLine(); strLine= br.readLine(); String[] line = strLine.split(" "); String part1 = line[0]; String part2 = line[1]; String part3 = line[2]; //Read File Line By Line while ((strLine= br.readLine()) != null) { if(name.equals(part1)) { // Print the content on the console System.out.print("\n" + part2 + " " + part3); } } }catch (Exception e) {//Catch exception if any System.out.println("Error: " + e.getMessage()); } } }
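
    A hedged rewrite of the display loop (not the original code): the split has to happen for every line read; as written, part1/part2/part3 keep the values from the very first line, which is read once before the loop and then effectively thrown away -- hence the same name and number printed once per remaining line.

      String strLine;
      while ((strLine = br.readLine()) != null) {
          String[] line = strLine.split(" ");
          if (line.length >= 3 && name.equals(line[0])) {
              System.out.println(line[1] + " " + line[2]); // last name and phone number
          }
      }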

    Read the article

  • Dealing with personal failure

    - by codeelegance
    A while ago I was given the task of updating and extending the functionality of a software project. I was given a year to make the needed changes working solo. A month into development I came to the conclusion that it would take longer to change the existing product than to rewrite it from the ground up. I'd never attempted a complete rewrite so I talked with my boss about it and he was thrilled with the idea. I'm a fan of agile development but had never had the opportunity to take advantage of all of the prescribed practices so when I set to work I tried to incorporate as many as I could. I didn't have direct access to the customer and my coworkers (non-programmers) knew the business domain but were already so busy they didn't really have time to participate in design meetings so I resigned to working in the dark and occasionally calling one of them over to my desk to get feedback on my progress. I used TDD and refactored mercilessly and even tried taking a domain driven design approach. Things went well for a while. As the deadline came closer and the complexity of the project grew my productivity start slipping. I found myself cutting corners and ignoring the practices I had established as the pressure increased to meet the deadline. I also started working late nights and weekends to keep up with the load. In the end it made little difference how hard I worked. The project missed its deadline and what was completed wasn't enough to give to the customer. I had failed. Not only had I not finished on time but the previous version had sat untouched for almost a year so it wouldn't be of any help. Luckily we had another product that offered some of the same functionality. My boss decided to cancel the project entirely and moved all our orphaned customers to the other product. I spent weeks (along with everyone else at the company) manning the phones providing technical support for those customers. After it was all over, my boss was gracious enough not to fire me for nearly ruining the company. I was moved to the other product and have been trying to redeem myself ever since. Where did I go wrong? Has anyone else had to deal with this kind of defeat? How did you recover?

    Read the article
