Search Results

Search found 1540 results on 62 pages for 'mr ed'.


  • How to run WordPress and a Tomcat-hosted Java web app on the same server?

    - by Chantz
    I have to run a WordPress site served via Apache2 and a Java-based web app served by Tomcat on the same server. When users come to example.com or example.com/public-pages they need to be served by WordPress, but when they come to example.com/private-pages they need to be served by Tomcat. I asked this question on Server Fault, where the suggestions were a different port, a different IP, or a sub-domain. I want to go with the different-port solution, since it means I only need to buy one SSL certificate. I tried the reverse proxy method by putting the following in my default-ssl.conf:

        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            ServerName localhost:443
            DocumentRoot /var/www
            <Directory /var/www>
                # For WordPress
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyRequests Off
            ProxyPass /private-pages ajp://localhost:8009/
            ProxyPassReverse /private-pages ajp://localhost:8009/
            SSLEngine on
            SSLProxyEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
        </VirtualHost>

    As you may have noticed, I am using mod_proxy_ajp in Apache2, with Tomcat listening on port 8009 and serving the content. Now when I go to example.com/private-pages I see the content from Tomcat, but two issues occur:

    1. All my static resources get 404-ed, so none of my images, CSS or JS load. I see that the browser requests the resources as example.com/css/*, which clearly cannot work: those requests resolve against the WordPress document root on port 80, not the Tomcat app on port 8009, and no such resources exist in the WordPress directory.

    2. If I go to example.com/private-pages/abcd I am somehow kicked back to the WordPress site (which obviously displays a 404 page).

    I can understand why #1 is happening but have no clue why #2 is. Regardless, if there is another, cleaner solution that resolves this, I would appreciate y'alls help.
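    One common way to address both symptoms (a sketch, not a tested fix; the context name is an assumption) is to deploy the web app in Tomcat under the same context path that Apache proxies, so every URL the app generates already starts with /private-pages:

        # Deploy the app as "private-pages" in Tomcat (e.g. webapps/private-pages.war),
        # then proxy the matching path through without stripping it:
        ProxyPass        /private-pages ajp://localhost:8009/private-pages
        ProxyPassReverse /private-pages ajp://localhost:8009/private-pages

    With the context paths aligned, the app's CSS/JS/image links resolve under /private-pages/... instead of colliding with the WordPress docroot. Mismatched trailing slashes between the ProxyPass source and target are also a classic cause of sub-paths like /private-pages/abcd falling through to the default handler, so keep both sides consistent.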

    Read the article

  • BSOD Dump - EXCEPTION_DOUBLE_FAULT - ON Windows 2008 Server 64bit

    - by Mark K
    Hello, my Windows 2008 Server (Datacenter edition, 64-bit) has recently produced a series of BSODs under different applications. The error message is generally EXCEPTION_DOUBLE_FAULT. Can anyone please help with the analysis of the dump file below? Best regards, Mark

        2: kd> !analyze -v
        *** Bugcheck Analysis ***
        UNEXPECTED_KERNEL_MODE_TRAP (7f)
        This means a trap occurred in kernel mode, and it's a trap of a kind that the kernel
        isn't allowed to have/catch (bound trap) or that is always instant death (double fault).
        The first number in the bugcheck params is the number of the trap (8 = double fault, etc.)
        Consult an Intel x86 family manual to learn more about what these traps are.
        Here is a portion of those codes:
        If kv shows a taskGate, use .tss on the part before the colon, then kv.
        Else if kv shows a trapframe, use .trap on that value.
        Else .trap on the appropriate frame will show where the trap was taken
        (on x86, this will be the ebp that goes with the procedure KiTrap).
        Endif
        kb will then show the corrected stack.
        Arguments:
        Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
        Arg2: 0000000080050033
        Arg3: 00000000000006f8
        Arg4: fffff800018b1678

        Debugging Details:
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: DRIVER_FAULT_SERVER_MINIDUMP
        PROCESS_NAME: CustomerService.
        CURRENT_IRQL: 1

        EXCEPTION_RECORD: fffffa6004e45568 -- (.exr 0xfffffa6004e45568)
        ExceptionAddress: fffff800018a0150 (nt!RtlVirtualUnwind+0x0000000000000250)
        ExceptionCode: 10000004
        ExceptionFlags: 00000000
        NumberParameters: 2
        Parameter[0]: 0000000000000000
        Parameter[1]: 00000000000000d8

        TRAP_FRAME: fffffa6004e45610 -- (.trap 0xfffffa6004e45610)
        NOTE: The trap frame does not contain all registers. Some register values may be zeroed or incorrect.
        rax=0000000000000050 rbx=0000000000000000 rcx=0000000000000004
        rdx=00000000000000d8 rsi=0000000000000000 rdi=0000000000000000
        rip=fffff800018a0150 rsp=fffffa6004e457a0 rbp=fffffa6004e459e0
        r8=0000000000000006  r9=fffff8000181e000 r10=ffffffffffffff88
        r11=fffff80001a1c000 r12=0000000000000000 r13=0000000000000000
        r14=0000000000000000 r15=0000000000000000
        iopl=0 nv up ei pl zr na po nc
        nt!RtlVirtualUnwind+0x250:
        fffff800018a0150 488b02 mov rax,qword ptr [rdx] ds:00000000000000d8=????????????????
        Resetting default scope

        LAST_CONTROL_TRANSFER: from fffff800018781ee to fffff80001878450

        STACK_TEXT:
        fffffa6001768a68 fffff800018781ee : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
        fffffa6001768a70 fffff80001876a38 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x6e
        fffffa6001768bb0 fffff800018b1678 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb8
        fffffa6004e44e30 fffff800018782a9 : fffffa6004e45568 0000000000000001 fffffa6004e45610 000000000000023b : nt!KiDispatchException+0x34
        fffffa6004e45430 fffff800018770a5 : 0000000000000000 0000000000000000 0000000000000000 0000000000000001 : nt!KiExceptionDispatch+0xa9
        fffffa6004e45610 fffff800018a0150 : fffffa6004e46638 fffffa6004e46010 fffff80001965190 fffff8000181e000 : nt!KiPageFault+0x1e5
        fffffa6004e457a0 fffff800018a3f78 : fffffa6000000001 0000000000000000 0000000000000000 ffffffffffffff88 : nt!RtlVirtualUnwind+0x250
        fffffa6004e45810 fffff800018b1706 : fffffa6004e46638 fffffa6004e46010 fffffa6000000000 0000000000000000 : nt!RtlDispatchException+0x118
        fffffa6004e45f00 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDispatchException+0xc2

        STACK_COMMAND: kb
        FOLLOWUP_IP: nt!KiDoubleFaultAbort+b8
        fffff800`01876a38 90 nop
        SYMBOL_STACK_INDEX: 2
        SYMBOL_NAME: nt!KiDoubleFaultAbort+b8
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe
        DEBUG_FLR_IMAGE_TIMESTAMP: 4a7801eb
        FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        Followup: MachineOwner
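    For anyone working through a similar dump in WinDbg, the output above already names the follow-up steps; with this dump's addresses (substitute the values from your own EXCEPTION_RECORD and TRAP_FRAME lines), the usual sequence is:

        2: kd> .exr 0xfffffa6004e45568
        2: kd> .trap 0xfffffa6004e45610
        2: kd> kb

    .exr prints the exception record, .trap switches the register context to the trap frame, and kb then shows the corrected call stack, which is often enough to spot the driver sitting above the fault.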

    Read the article

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of VirtualPC 2007 images that I use for testing on various platforms and looking at beta software, and I have installed the 64-bit version of VirtualPC. The images all work with the exception of the mouse wheel within the virtual machine.

    I have tried this out with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation of the wheel sometimes gets a scroll and sometimes not. I regard this behavior as unusable, as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP. I have re-installed the Virtual Machine Additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of VirtualPC (just to isolate the possibility that I had corrupted the images during the transition).

    I have googled, binged, and yahoo-ed. There are scattered mentions of this problem (dating back to VPC 2004) but no solutions. I am aware that I could start up one of these images and then use Remote Desktop connections to get access to it; in fact, I do just that for some development work, and the mouse works just fine there. That is acceptable because I spend hours at a time in the development VM. These test environments are different in that I will bring up an image for just a short time: minutes rather than hours. Adding the RDC step is much more significant in these cases.

    Does anyone have any idea of what to do next?

    Read the article

  • Netcat (nc) traditional package for RHEL 6.x?

    - by HTTP500
    I'm trying to use the Percona Apache Monitoring [Cacti] template for Memcached. They do warn that you can't use the OpenBSD version of the package, and they provide a solution for Ubuntu/Debian users, i.e.:

        You need nc on the server. Some versions of nc accept different command-line options.
        You can change the options used by configuring the PHP script. If you don't want to do
        this for some reason, then you can install a version of nc that conforms to the
        expectations coded in the script's default configuration instead. On Debian/Ubuntu,
        netcat-openbsd does not work, so you need the netcat-traditional package, and you need
        to switch to /bin/nc.traditional...

    Since the RHEL 6.x version does indeed come from OpenBSD (confirmed by rpm -qi nc), how does one go about getting this installed on RHEL/CentOS? Is anyone else running these Percona templates on RHEL/CentOS? What did you do? alien the Debian package?

    Update 1: FWIW, I tried GNU netcat, compiled from source, but it doesn't seem to have the exact options required by the Cacti template either (i.e. there appears to be no analogue for -C or -q1).

    Update 2: I alien-ed the netcat-traditional_1.10-38_amd64.deb package to make a .tgz; it does produce a binary "nc.traditional", and that version has the -q option but no -C.

    Cheers
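    For anyone trying the same route, the alien conversion itself is a one-liner (a sketch; the exact output filename will vary with the package version):

        # on a box with alien installed, convert the .deb to a plain tarball
        alien --to-tgz netcat-traditional_1.10-38_amd64.deb
        tar xzf netcat-traditional-*.tgz
        # the binary unpacks as bin/nc.traditional; copy it wherever the
        # Cacti script expects and point the script's nc path at it

    Since the -C flag still isn't there even in that build (per Update 2 above), the other lever Percona mentions, editing the command-line options in the PHP script itself, may be the cleaner end state on RHEL.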

    Read the article

  • Can an image based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours, and I'm wondering if they can potentially lead to application data corruption. Let's say that I have a database that is getting hit pretty hard. Could the beginning blocks of the database be committed to the image, data then be inserted into the DB (which changes the beginning blocks of the DB on the server, but not in the image), and the remaining blocks then be committed to the image, leaving the image in an inconsistent state?

    Here's an example of what I'm trying to illustrate. Imagine a simple data structure in which the number at the front of a file records how many "a"s the file contains, delimited from the data by a "-". For example:

        4-ajjjjjjjajuuuuuuuaoffffa

    If an "a" is changed, the data structure rewrites the count at the beginning of the file, such as:

        3-ajjjjjjjajuuuuuuuboffffa

    I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database (the caret marks how far the image copy has progressed; everything before it is already committed to the image):

        t0: 4-ajjjjjjjajuuuuuuuaoffffa
            ^
        t1: 4-ajjjjjjjajuuuuuuuaoffffa
             ^
        t2: 3-ajjjjjjjajuuuuuuuboffffa
                                ^
        t3: 3-ajjjjjjjajuuuuuuuboffffa
                                      ^

    Between t1 and t2, one of the "a"s changed to a "b", so there are only 3 "a"s and the live file's count was rewritten to 3; the old count of 4, however, had already been committed to the image at t1.

    The final image therefore reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leaving a corrupt "database". Basically, changes further along the chain of blocks can be reflected in the image while header and synchronization data earlier in the file have already been committed, so the out-of-date header no longer accurately reflects the structure of the blocks that follow.

    Read the article

  • Returning a 404 page when a folder is accessed from one domain, but allowing access from other domains and IP addresses

    - by okw
    Situation: I want to return a 404 page ("404.php") when a folder ("hidden") is accessed from the example.com domain. I want the same folder to be accessible from a subdomain ("hidden.example.com") or from a different domain ("hidden.com"), which are both configured in a single VirtualHost entry. The server listens on multiple IP addresses; each IP address serves identical content from the example.com domain (sharing a VirtualHost entry), and I want the folder to be accessible via the IP addresses. The server is configured to use SSL/TLS/HTTPS. HTTPS is optional on example.com, but HTTPS is enforced for the hidden folder using the rewrite rule shown below.

    /www/hidden/.htaccess:

        RewriteCond %{HTTPS} !=on
        RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [R,L]

    I know that %{SERVER_ADDR} gives the server's IP address, but does it return the one that the client is requesting from? I'm also starting to think that something in the VirtualHosts file would be more appropriate. Any thoughts on this?

    What should be allowed:

        http://87.65.43.21/hidden/
        https://87.65.43.21/hidden/
        http://12.34.56.78/hidden/
        https://12.34.56.78/hidden/
        http://hidden.example.com/
        https://hidden.example.com/
        http://hidden.com/
        https://hidden.com/
        http://www.hidden.com/
        https://www.hidden.com/

    What should be 404-ed with 404.php:

        http://example.com/hidden/
        https://example.com/hidden/
        http://www.example.com/hidden/
        https://www.example.com/hidden/
        http://example.com/hidden/hiddenfile.php
        https://example.com/hidden/hiddenfile.php
        etc.

    Thanks.
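    One way to get this split (a sketch, assuming Apache 2.2+ with mod_rewrite, placed in /www/hidden/.htaccess alongside the HTTPS rule) is to key the 404 off the Host header rather than off the directory, since %{HTTP_HOST} is the name the client actually asked for:

        ErrorDocument 404 /404.php
        RewriteEngine On
        # Requests that arrived under (www.)example.com get a 404;
        # bare-IP, hidden.com and hidden.example.com requests fall through.
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^ - [R=404,L]

    Requests made directly to an IP address carry that IP in the Host header, so they don't match the condition and are served normally.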

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after startup with a 'BAD_POOL_HEADER' (0x00000019) error caused by the SFC4.SYS driver. After googling for a while, I found out that this is the driver for my Mustek 1200 CP scanner. Booting in safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner.

    The weird thing is that it had been working for a while on this PC without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing the driver from the Mustek website, in case my copy got corrupted or infected by a virus, but it did not help; it still bluescreens. Also, I've installed Avast and scanned my PC; no viruses were found. If anyone has had this problem before or has an idea what might have caused it, please help.

    ED: @Michael Todd: ...try installing on another PC... I've installed it on my friend's PC. He has the same OS version with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling that driver :] ). It worked fine: no bluescreens whatsoever. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is to re-install XP, or install Windows 7. I'm not too happy about the prospect of mucking about with BIOS settings...

    Read the article

  • Deciphering an IIS6 Httperr log file

    - by smackaysmith
    We have a Windows 2003 R2 SP2 server with IIS6 that is creating a 1024 KB httperr file every minute, and I can't figure out what I'm looking at. Here's a snippet:

        2010-03-24 13:15:05 10.53.2.35 1667 10.53.2.12 80 HTTP/1.1 PUT
        /hserver.dll?&V01|&IMAC=0080646077AB|CID=32|CN=LWT0080646077AB|ED=1|IP=10.53.2.35|SM=255.255.255.0|GW=10.53.2.1|SN=10.53.2.255|DM=logs.com|1D=10.53.2.12|2D=10.101.2.12|0D=1|AL=/usr/sbin/netxserv|AV=4.1.0.0|CP=VIAüEstherüprocessorüü800MHz|CPS=800|RM=190512|B1=1.18|PD2=1024x768x16ü@ü60Hz|IM=6.6.2-02|CI=3600|SN#=6KHDG301300|OS=23|VI=1|P1=24|TZO=-301|TZ=CDT|FS=128|MD=2003-04|CO=|LO=|AP0=BaseüSystem|NA|6.6.2-02|AP1=RapportüAgent|NA|4.1.0-3.26|AP2=TrueType|NA|6.8.0-3.4|AP3=WebFonts|NA|2.0.4-3.6|AP4=TrueTypeüFonts|NA|6.8.0-3.5|AP5=Network_login|NA|1.0.0-1.0.3|AP6=ScreenüSaver|NA|3.13|AP7=DMonitor|NA|1.0.0-0.4.0|AP8=MozillaüFirefox_15|NA|1.5.0.8-3.6|AP9=RemoteüShadow|NA|3.17|AP10=RemoteüDesktop|NA|1.6.0-1.0|AP11=SNMP|NA|5.1.3.1-3.13|AP12=LinuxüPrinting|NA|3.8.27-3.33|AP13=SSH|NA|3.8.1-3.25|AP14=ThinPrint|NA|6.2.87-0.2|AP15=XDMCP|NA|6.8.0-3.29|AP16=Ericom|NA|8.2.0-3.29|AP17=Daylightüsavingütimeüupdate|NA|1.1.0-1.0.0|
        411 - LengthRequired -

    What on earth am I looking at? There is nothing in the system or application logs. Finally, in IIS Manager, the Default Web Site label shows boxes instead of spaces. Very odd.
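    Reading the fields against the httperr log format helps here: the trailing "411 - LengthRequired" is HTTP.sys rejecting the request before it ever reaches a website, because the device at 10.53.2.35 is sending a PUT without a Content-Length header. A conforming version of that request would look roughly like this (a sketch; headers abbreviated):

        PUT /hserver.dll?&V01|&IMAC=... HTTP/1.1
        Host: 10.53.2.12
        Content-Length: 0

    So the fix usually lives on the sending device (here, apparently a thin-client management agent phoning home every minute), not in IIS.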

    Read the article

  • Home network with two isolated separate subnets, running on cablemodem/router and WRT-router.

    - by Johan Allgoth
    I have a new connection with a nice new router/cable-modem. I'd like to set it up optimally and need some pointers. I am a complete n00b when it comes to routing.

    I want to end up with two separate subnets, 10.1.2.0/24 and 192.168.1.0/24, each available on its own wireless channel/SSID, and both firewalled. I want my wired computers on the gigabit switch, optimally with public IPs. I want to be able to reach 192.168.1.0/24 from 10.1.2.0/24, but not vice versa. Everyone should have internet access.

    Hardware and capabilities:

        Netgear CG3100: handles the cable connection; gigabit switch; 802.11n. Can do DHCP, firewall, NAT, etc. Can choose its subnet. Can turn off NAT and, if so, hand out up to 4 public IPs. Somewhat challenged when it comes to configuration.

        WRT router: runs DD-WRT/OpenWRT very stably; 100 Mbit switch; 802.11g. Can do DHCP, firewall, NAT, etc. Can choose its subnet. Highly configurable.

    I hope to keep 10.1.2.0/24 on the CG3100, for speed reasons, and 192.168.1.0/24 on the WRT router, for quota and user-control reasons. On my 10.1.2.0/24 network I plan on running servers for various services.

    Should I turn off NAT on the WRT router, or on the cable modem? Activate what in that case? And is double NAT always f-ed up?
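    On the DD-WRT/OpenWRT side, the asymmetric visibility you describe is mostly a firewall-ordering exercise; a minimal sketch (rule placement relative to your build's existing chains is an assumption, so adapt before pasting):

        # 10.1.2.0/24 may open connections into 192.168.1.0/24
        iptables -A FORWARD -s 10.1.2.0/24 -d 192.168.1.0/24 -j ACCEPT
        # 192.168.1.0/24 may only answer, never initiate
        iptables -A FORWARD -s 192.168.1.0/24 -d 10.1.2.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -s 192.168.1.0/24 -d 10.1.2.0/24 -j DROP

    Internet-bound traffic from either subnet matches none of these destinations and is handled by the normal NAT rules, so both sides keep their internet access.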

    Read the article

  • Iframe pages on Facebook do not show in Internet Explorer 9 - Windows 7 64-bit

    - by Morten
    I have a very irritating problem with Internet Explorer 9 and Facebook. If I go to Facebook and view a page with iframes (like IFBML pages), it will not show up in Internet Explorer 9. It shows up in Firefox 4 and Chrome 10, but not in Internet Explorer 9. I run Windows 7 64-bit SP1 (Danish). The strange thing is that I own three different PCs, all running Windows 7 64-bit SP1, and all of them have this issue. I can't figure out what causes it. I have tried the following:

        Uninstalled AVG antivirus and installed Microsoft antivirus - no change
        Updated Windows with SP1 - no change
        Updated from Internet Explorer 9 beta to the Internet Explorer 9 final edition - no change
        Emptied cache and temp files in Internet Explorer 9 - no change
        Made www.facebook.com a trusted site in Internet Explorer 9 - no change

    And a lot of other things I can no longer remember... but nothing seems to work. As I spend quite a lot of my working time developing Facebook fan pages, it is frustrating not to be able to test them in Internet Explorer 9. BTW, it is Internet Explorer 9 32-bit, not 64-bit. Any clues?

    Read the article

  • Remote Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (104)

    - by user39891
    Hi. I've been a C coder for a while now; neither a newbie nor an expert. I have a daemonized application in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for multiplexing connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr() and, if found, spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to interact with the webservers via TCP on port 80 in each thread. All connections and writes seem successful. Reads from the webserver sockets fail, however, in one of two ways:

    (A) Three of the threads seem to hang, and only one thread returns -1 with errno set to 104. The responding thread takes about 10 minutes, an eternity :-(. (I read somewhere that errno 104 in the network context suggests 'connection reset by peer'; for what it's worth, that is ECONNRESET, not EINTR, which is a different errno.)

    (B) Three threads return 0 bytes, and only 1 of the 4 threads actually returns some data.

    Aren't socket read/write calls thread-safe? I use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the webhosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal), it works perfectly, though of course in series rather than in parallel.

    There's a second problem too (oops): I can't write back to the client that connects to my epoll-ed Unix socket. My daemon application hangs and hogs the CPU at 100% forever, yet nothing is written to the client's end. I'm sure the client (a very typical PHP socket application) hasn't closed the connection when this happens; no errors are detected either.

    Any ideas? I cannot figure out what is wrong, even with Valgrind, GDB, or extensive logging. Kindly help where you can.
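    As a general pattern (a sketch of defensive socket reading, not a diagnosis of this specific daemon), blocking reads on a per-thread socket are safe, but the loop has to tolerate partial reads and EINTR explicitly, and treat ECONNRESET as a distinct outcome:

        #include <errno.h>
        #include <unistd.h>

        /* Read up to len bytes, stopping early only on orderly close.
         * Returns bytes read, or -1 on a real error (errno preserved). */
        static ssize_t read_all(int fd, char *buf, size_t len)
        {
            size_t got = 0;
            while (got < len) {
                ssize_t n = read(fd, buf + got, len - got);
                if (n == 0)                /* peer performed an orderly close */
                    break;
                if (n < 0) {
                    if (errno == EINTR)    /* interrupted by a signal: retry */
                        continue;
                    return -1;             /* e.g. ECONNRESET (104 on Linux) */
                }
                got += (size_t)n;
            }
            return (ssize_t)got;
        }

    Two other things worth checking against this pattern: an HTTP/1.1 server is entitled to wait for a complete request (including the terminating blank line) before answering, and on a keep-alive connection the response ends after Content-Length bytes rather than at EOF, so a read loop that waits for EOF will appear to hang until the server times the connection out, much like the 10-minute stall described above.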

    Read the article

  • Blink build with Xcode failed

    - by Merci
    I found a GPL-ed SIP client for Mac, Blink. I'd like to build it from source, since the binaries are only available as a paid download. Just FYI: I'm studying programming at university, but I have no experience building complex applications from source.

    After downloading the content of the repository, I opened the Xcode project and tried to build on OS X 10.7 with Xcode 4.2.1. Unfortunately the build fails with 1 error and many warnings. Most of the warnings are like this:

        Attribute Unavailable: Custom Identifiers in Interface Builder versions prior to 3.2

    The error message is:

        Apple Mach-O Linker (ld) Error
        Command /Developer/usr/bin/clang failed with exit code 1

    preceded by the warning:

        Apple Mach-O Linker (ld) Warning
        directory not found for option '-L/Users/Sergio/Downloads/Blink/devel.ag-projects.com/repositories/public/blink-cocoa/Distribution/Frameworks'

    I notice that the following required files are missing:

        Dependencies/Frameworks:
            libgcrypt.11.6.0.dylib
            libgcrypt.11.dylib
            libgnutls-extra.26.dylib
            libgnutls.26.dylib
            libgpg-error.0.dylib
            libintl.8.dylib
            liblzo.1.dylib
            libtasn1.3.dylib
        Dependencies/Resources:
            lib
        Frameworks/Linked Frameworks:
            Sparkle.framework
        Products:
            Blink.app

    It should be possible to download these files somewhere. Unfortunately googling did not help, and there's no documentation on the project site.

    Read the article

  • Is this way of using Excel 2007 pivot tables for BI scalable?

    - by Sim
    Hi all,

    Background:

        We need to consolidate sales data across the country to do analysis.
        Our internet connection/IT expertise/IT investment is not very strong, so a full BI solution is out of the question.
        I tried several SaaS BI solutions (GoodData, Zoho Reports) and while they're good, they don't seem to fully support what we need.
        We're looking at about 2 million records for every 2 months.

    My current approach:

        Our 10 sites currently gather data from all their branches and consolidate it into 1 Excel file with a pivot table and embedded source data.
        At HQ, I will ask the 10 sites to send back those Excel files periodically.
        We will import those Excel files into our MSSQL server.
        There will be a master Excel file that has the same pivot table (as the site Excel files), with the MSSQL server as its data source.

    More details:

        For testing, I currently use MSSQL 2008 Express on my laptop.
        So far, I have imported our transactions for the past 2 months: 2 million+ rows in 1 table in MSSQL (we just use 1 table, corresponding to our common pivot table structure). DB size is ~600 MB.
        The master Excel file is under 10 MB without the source data, and 60 MB with it (so I suppose Office 2007 automatically compresses the data?).
        Pivoting (drag-and-drop fields) performs OK so far (my laptop specs: C2D T7200, 3 GB RAM, Windows XP).

    So my questions are:

        If we're looking at a full year of transactions (roughly 15 million rows, 3.6 GB in size), is there any issue with 15 million rows in 1 table in SQL Express?
        Is there any performance issue with the pivot table at that point? Can it still embed the source data? (I googled but didn't find the maximum size of source data Excel 2007 can embed.)
        Any other suggestions on how we can do this better? Given that we can't afford a full BI solution, is there any lightweight/budget/SaaS BI that you can recommend?

    Thanks

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software that accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown), we could coordinate the two systems and help out.

    Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule.

    Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
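    One partial lever that does exist (a sketch; it defers the post-patch reboot rather than the patch itself, and only applies while a session is logged on) is the Windows Update policy NoAutoRebootWithLoggedOnUsers, settable per node via GPO or directly in the registry:

        rem Block automatic reboot after patch installation while anyone is logged on
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f

    Combined with a scheduler that drains jobs off a node ahead of its window, this at least keeps WSUS from yanking a node out from under a running 7-day job, though it does nothing for jobs running as services with no logged-on session.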

    Read the article

  • Proper end of day sequence to maintain monitor config

    - by WarmBeer
    I've got an HP EliteBook 6930p that travels between home, where it is connected to individual cables, and work, where there is a docking station. At both locations I have an external monitor as the secondary display, and I like to have the laptop screen as the primary, i.e. the one with the taskbar.

    At the end of the day I close the laptop, which is supposed to put it in standby. When I get home, I plug in the power cord and the external monitor cord and open the computer. When heading in to work, I close the computer and unplug everything. Inevitably, when I open the computer at the new location the monitors are reversed: the primary display with the taskbar is on the external monitor and the laptop shows the secondary, even though when I click Identify the laptop has the 1. I then have to disable the secondary display, switch the primary to the laptop, and re-enable the secondary. I've tried locking the computer before closing it, and occasionally that keeps the setup in place, but not always.

    Any suggestions for how to keep the configuration in place during transport?

    ed

    Read the article

  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as:

        Information about the machine on which a web server is located is sometimes included in the
        header of a web page. Under certain circumstances that information may include local
        information from behind a firewall or proxy server, such as the local IP address.

    It looks like Nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered, the problem is that the Location header is returning http://ip-10-194-73-254/, a private internal hostname, when it should be returning our domain name (ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer, not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than 1 server, so the configuration would need to be transferable to any server with any private address.
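    Since the redirect host comes from whatever Host header the scanner sent (here, the instance's internal name), one approach (a sketch, assuming the scan reaches you by bare IP and your certificate setup allows an extra server block) is a catch-all vhost so such requests never reach the Rails app at all:

        # default vhost: requests addressed by IP or an unknown Host name
        server {
            listen 443 default_server ssl;
            server_name _;
            # reuse the existing certificate paths here
            return 444;   # close the connection without any response headers
        }

        # real vhost
        server {
            listen 443 ssl;
            server_name ravn.com www.ravn.com;
            # ... existing Passenger/Rails configuration ...
        }

    A friendlier variant is return 301 https://ravn.com$request_uri; in the catch-all, which answers with your public name no matter what the client asked for.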

    Read the article

  • Nginx flv audio pseudo stream works but video is not loading

    - by sarah
    I am working on a development server for a company, and they want the nginx web server to work with it. Their requirements are that it should be capable of the following: hotlink protection, mp4 and flv pseudo-streaming, and secure streaming. nginx fulfills these requirements. I have been configuring their server for the past 2 days; as I am new to this field, I've only achieved hotlink prevention so far. Getting mp4 pseudo-streaming to work was straightforward, but I am thoroughly stuck on flv pseudo-streaming.

    I converted my flv videos with the flvmdi tool to insert many keyframes, but when I try to seek the video to one of the keyframes generated by flvmdi, e.g. test.flv?start=2681223, the video does not load, though audio pseudo-streaming works fine. So it does not appear to be a problem with the flv configuration in my nginx.conf. The guide I used to compile my nginx-1.2.1 is http://h264.code-shop.com/trac/wiki/Mod-H264-Streaming-Nginx-Version2, adding the additional module --with-http_flv_module. This forum is really active, so I hope I will resolve my problem as soon as you guys provide some guidance.
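    For reference, the flv side of the configuration is small once the module is compiled in (a sketch; the location pattern is an assumption about the site layout):

        location ~ \.flv$ {
            flv;    # enables byte-offset seeking via the ?start= argument
        }

    One thing worth verifying with this setup: the flv module serves the file from the byte offset the player asks for (prepending a minimal FLV header), so the player must request offsets taken from the keyframes array in the file's onMetaData, which is exactly what flvmdi writes. If the video stays blank while the audio plays, a player that computes offsets itself instead of reading the injected metadata is a common culprit, since video cannot render until the next keyframe.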

    Read the article

  • Vmware Workstation, Win7 host, Ubuntu guests with Nat + Host-only networks but they cannot connect to the Internet

    - by Ikon
    I have a Win7 host machine with VMware Workstation, and in Workstation I have 3 Ubuntu guests installed. All 3 guests have a NAT interface, to access the internet without asking the router for a local address, and a Host-only interface, to connect all the Ubuntu guests and the host in a private network for internal communication without touching the router.

    When I try to make any of the Ubuntu guests fetch data from the internet, assuming they would figure out that the NAT-ed interface can reach the requested data, they fail and report that there is no route for my query. If I disconnect the second (host-only) interface on a guest and restart networking, it starts to find the route to the internet. Oddly, during installation the guests asked which of the 2 interfaces, NAT or host-only, should be used to get updates, and they managed to download the updates; not so after the installation finished and the machine rebooted.

    I have checked in the Virtual Network Editor that the NAT interface uses my real network card to access the net, so there should be no problem there. I do not want to use the router's DHCP service to give the Ubuntu guests addresses, and I also don't want the guests to be accessible from the local network directly, only from the host; that's what the host-only network is for.

    Any suggestions?

    Edit: 192.168.189.0 is the NAT network and 192.168.7.0 is the host-only network.

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.7.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        192.168.189.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
        0.0.0.0         192.168.189.2   0.0.0.0         UG    100    0        0 eth0
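    The routing table above already shows a correct default route via the NAT gateway, so one thing worth ruling out (a sketch of /etc/network/interfaces on a guest; the address and device names are assumptions taken from the route output) is the host-only interface supplying its own gateway or DNS via DHCP:

        auto eth0
        iface eth0 inet dhcp        # NAT: VMware supplies address, gateway and DNS

        auto eth1
        iface eth1 inet static      # host-only: address only, deliberately no gateway
            address 192.168.7.10
            netmask 255.255.255.0

    If eth1 stays on DHCP, its lease can rewrite /etc/resolv.conf with a nameserver that is only reachable on the host-only segment; name lookups then fail in a way many tools report as "no route", even though the default route itself is fine.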

    Read the article

  • Ubuntu 10.10 - disaster - what other linux for beginner?

    - by A-ha
    Guys, I've tried to install Ubuntu (desktop and netbook editions) on my laptop, and unfortunately I have to say that, despite the installation process being supposedly easy, I couldn't finish installing this system: it didn't detect my keyboard, or rather lost my keyboard, as soon as I tried to toggle the touchpad on my laptop. After I discovered that, I started all over again (this time without touching my laptop's touchpad during installation) and eventually got to the end of the installation.

    Unfortunately, when I then tried to toggle the touchpad (sometimes I just do not want to use a mouse), the whole system froze, so I had to restart it with the power button. This time I didn't touch the touchpad at all, plugged in a mouse, and tried to rearrange the taskbars to my liking (all taskbars at the top of the screen with auto-hide on), and I gave up. It is so unfinished that I just can't be bothered to use it.

    I would like to have one Linux system on my machine, so I started googling, and most of the links are to either Ubuntu (which I just do not want to touch for now), SUSE, or commercial versions of Linux. I do not really mind paying for something (and, after this experience with Ubuntu, I'd rather pay and have something professional than get it free and discover that it's unusable). So could someone please provide a short list of Linux distros that would be appropriate for a beginner? I don't mind paying; I just want a professional product.

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE of the log file. The script mentioned in an email I received from a reader contains the following questionable code:

        "Hi Pinal,

        If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was:

            Msg 155, Level 15, State 1, Line 1
            'TRUNCATE_ONLY' is not a recognized BACKUP option.

        The code was:

            DBCC SHRINKFILE(TestDBLog, 1)
            BACKUP LOG TestDB WITH TRUNCATE_ONLY
            DBCC SHRINKFILE(TestDBLog, 1)
            GO

        I have modified that code to the following and it works fine. But are there other suggestions you have at the moment?

            USE [master]
            GO
            ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
            DBCC SHRINKFILE(TestDbLog, 1)
            ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
            GO

        Configuration of our server and system is as follows: [removed irrelevant data]"

    An email like this suddenly popping up early in the morning is an alarming email. I was dead busy, so I had only one minute to reply, and I quickly wrote down the following note. (As I said, it was a one-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you.

        "Hi Mr. DBA [name removed],

        Thanks for your email. I suggest you stop this practice. There are many issues here, but I would list the two major ones:

        1) By setting the database to simple recovery, shrinking the file, and then setting it back to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups.

        2) Shrinking a file or database adds fragmentation.

        There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently:

            BACKUP LOG [TestDb] TO DISK = N'C:\Backup\TestDb.bak'
            GO

        Remove the SHRINKFILE code. If you are taking proper log backups, your log file usually (again, usually; special cases excluded) does not grow very big. There is so much more to add here, but you can call me at [phone number]. Before you call me, for accuracy I suggest you read Paul Randal's two posts here and here, and Brent Ozar's post here.

        Kind Regards,
        Pinal Dave"

    I guess this post is very clear to you. Please leave your comments here. As mentioned, this is a very big subject; I have just touched the tip of the iceberg and have tried to point to authentic knowledge.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. Using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds up coding by reducing the amount of typing required when creating a view-specific model and its metadata.

    To use this template, download the template provided at the bottom and set two values in the template file. The first one should point to the EDM you wish to generate metadata for; the second is used to suffix the namespace and the classes that get generated:

        string inputFile = @"Northwind.edmx";
        string suffix = "AutoMetadata";

    Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project, based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        using System;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        namespace NorthwindSales.ModelsAutoMetadata
        {
            public partial class CustomerAutoMetadata
            {
                [DisplayName("Customer ID")]
                [Required]
                [StringLength(5)]
                public string CustomerID { get; set; }

                [DisplayName("Company Name")]
                [Required]
                [StringLength(40)]
                public string CompanyName { get; set; }

                [DisplayName("Contact Name")]
                [StringLength(30)]
                public string ContactName { get; set; }

                [DisplayName("Contact Title")]
                [StringLength(30)]
                public string ContactTitle { get; set; }

                [DisplayName("Address")]
                [StringLength(60)]
                public string Address { get; set; }

                [DisplayName("City")]
                [StringLength(15)]
                public string City { get; set; }

                [DisplayName("Region")]
                [StringLength(15)]
                public string Region { get; set; }

                [DisplayName("Postal Code")]
                [StringLength(10)]
                public string PostalCode { get; set; }

                [DisplayName("Country")]
                [StringLength(15)]
                public string Country { get; set; }

                [DisplayName("Phone")]
                [StringLength(24)]
                public string Phone { get; set; }

                [DisplayName("Fax")]
                [StringLength(24)]
                public string Fax { get; set; }
            }
        }

    The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute:

        namespace MyProject.Models
        {
            [MetadataType(typeof(CustomerAutoMetadata))]
            public partial class Customer
            {
            }
        }

    You can also copy the code in the generated metadata class and create your own view model class. Note that the template is super basic and does not take complex properties into account. I have tested it with the Northwind database. This is a work in progress, so feel free to modify the template to suit your requirements.

    Standard disclaimer follows: use at your own risk; works on my machine running VS 2010 RTM/ASP.NET MVC 2.

    AutoMetaData.zip

    Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud &amp; Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: for an exclusive Oracle discount, enter promo code DJMXZ370.

    Early-bird registration is now open with special pricing! Register before July 1, 2012 to qualify for discounts; visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops, all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Big Data with CEP and SOA - September 25, 2012 - 14:15
    Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent

    The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, distributed caching, event-driven networks, SOA composites, the Application Development Framework, and Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, and distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture.

    Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of "The BPEL Cookbook" (rated best book on Service Oriented Architecture in 2007) as well as "Fusion Middleware Patterns". Previously, he was a management consultant with Booz Allen & Hamilton's High Tech practice in Duesseldorf as well as San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France.

    KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS:

        Cloud Computing Architecture & Patterns
        New SOA & Service-Orientation Practices & Models
        Emerging Service Technology Innovation
        Service Modeling & Analysis Techniques
        Service Infrastructure & Virtualization
        Cloud-based Enterprise Architecture
        Business Planning for Cloud Computing Projects
        Real World Case Studies
        Semantic Web Technologies (with & without the Cloud)
        Governance Frameworks for SOA and/or Cloud Computing Projects
        Service Engineering & Service Programming Techniques
        Interactive Services & the Human Factor
        New REST & Web Services Tools & Techniques

    Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills through certifications and customer references. To find a local Specialized partner, please visit http://solutions.oracle.com.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks:

        1. Load an OS onto an SD card
        2. Get the Pi connected to the network
        3. Load a JDK

    I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi, and not being able to find anything specific online to help me get past them, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly.

    There is a great resource page here that covers several ways to get the OS onto an SD card, but here is what I did (on a Mac):

        1. Plug the SD card into a reader on/in the Mac
        2. Format it (FAT32)
        3. Unmount it: diskutil unmountDisk diskn   (where n is the disk number representing the SD card)
        4. Transfer the Debian disk image to the SD card: dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m
        5. Eject the card from the Mac: diskutil eject diskn

    There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as they kept killing the Pi's ability to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi.

    Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one!

    Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck, there was. Here is the sequence of steps I used to make the magic happen:

        1. Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity).
        2. Boot your Pi. On the first boot, do the following:
           - Opt to have it use all space on the SD card (will require a reboot eventually)
           - Disable overscan
           - Set your timezone
           - Enable the ssh server
           - Update raspi-config
        3. Reboot your Pi. This will reconfigure the SD card to use all of its space (see above).
        4. After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved. ;-)
        5. With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic.

    One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration.

    To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time!

    All the best,
    Mark

    Read the article

  • Copy TFS Build Definitions between Projects and Collections

    - by Jakob Ehn
    Originally posted on: http://geekswithblogs.net/jakob/archive/2014/06/05/copy-tfs-build-definitions-between-projects-and-collections.aspx

    Over the last couple of years it has become apparent that using multiple team projects in TFS is generally a bad idea. There are of course exceptions, but a lot of things become much easier when you put all of your projects and teams in the same team project. Fellow ALM MVP Martin Hinshelwood has blogged about this several times, as have other people in the community. In particular, using the backlog and portfolio management tools makes much more sense when everything is located in the same team project.

    Consolidating multiple team projects into one is unfortunately not that easy; it involves migrating source code, work items, reports, etc. Another thing that also needs to be migrated is build definitions. It is possible to clone build definitions within the same team project using the TFS Power Tools, and the Community TFS Build Manager also lets you clone build definitions to other team projects. But there is no tool that allows you to clone/copy a build definition to another collection. So, I whipped up a simple console application that lets you do this. The tool can be downloaded from https://onedrive.live.com/redir?resid=EE034C9F620CD58D!8162&authkey=!ACTr56v1QVowzuE&ithint=file%2c.zip

    Using CopyTFSBuildDefinitions

    You use the tool like this:

        CopyTFSBuildDefinitions SourceCollectionUrl SourceTeamProject BuildDefinitionName DestinationCollectionUrl DestinationTeamProject [NewDefinitionName]

    Arguments:

        SourceCollectionUrl - the URL of the TFS collection that contains the team project with the build definition you want to copy
        SourceTeamProject - the name of the team project that contains the build definition
        BuildDefinitionName - the name of the build definition
        DestinationCollectionUrl - the URL of the TFS collection that contains the team project you want to copy the build definition to
        DestinationTeamProject - the name of the team project in the destination collection
        NewDefinitionName (optional) - use this to override the name of the new build definition; if you don't specify it, the name will be the same as the original

    Example:

        CopyTFSBuildDefinitions https://jakob.visualstudio.com DemoProject WebApplication.CI https://anotheraccount.visualstudio.com

    Notes:

        Since we are (potentially) creating a build definition in a new collection, there is no guarantee that the various paths defined in the build definition exist there. For example, a build definition refers to server paths in TFVC, or to repos and branches in TFGit, and it refers to build controllers that definitely don't exist in the new collection. So there will be some cleanup to do after you copy your build definitions. You can fix some of this using the Community TFS Build Manager; for example, it is very easy to apply the correct build controller to a set of build definitions.

        The problem stated above also applies to build process templates. However, the tool tries to find a build process template in the new team project with the same file name as the one in the old team project. If it finds one, it will be used for the new build definition; otherwise the default build template is used.

    If you want to run the tool for many build definitions, you can use this SQL script, compliments of Mr. Scrum/ALM MVP Richard Hundhausen, to generate the necessary commands:

        USE Tfs_Collection
        GO
        SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "' +
               REPLACE(BD.DefinitionName,'\','') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT'
          FROM tbl_Project P
               INNER JOIN tbl_BuildGroup BG ON BG.TeamProject = P.ProjectUri
               INNER JOIN tbl_BuildDefinition BD ON BD.GroupId = BG.GroupId
         ORDER BY P.ProjectName, BD.DefinitionName

    Hope that helps; let me know if you have any problems with the tool or if you find it useful.

    Read the article

  • MIXing it Up a Bit

    - by andrewbrust
    Another March, another MIX. For the fifth year running, Microsoft has chosen to put on a conference aimed less at software development, per se, and more at the products, experiences and designs that software development can generate. In all four prior MIX events, the focus of the show, its keynotes and breakout sessions was on Web products. On day 1 of MIX 2010, that focus shifted to Windows Phone 7 Series (WP7).

    What little we had seen of WP7 had been shown to us in a keynote presentation, given by Microsoft's Joe Belfiore, at the Mobile World Congress in Barcelona, Spain last month. And today, Mr. Belfiore reprised his showmanship for the MIX 2010 audience. Joe showed us the ins and outs of WP7 and, in a breakout session, even gave us a sneak peek of Office (specifically, Excel) on WP7. We didn't get to see that one month ago in Barcelona, nor did we get to see email messages opened for reading, which we saw today.

    But beyond a tour of the phone itself, impressive though that is, we got to see apps running on it. Those apps included Associated Press news, Seesmic (a major Twitter client) and Foursquare (a social media darling). All three ran, ran well, and looked markedly different from, and better than, their corresponding versions on iPhone and Android. And the games we saw looked even better.

    To me, though, the best demos involved the creation of WP7 apps, using Silverlight in Visual Studio and Expression Blend. These demos were so effective because they showed important apps being built in very few steps, and by Microsoft executives to boot. Scott Guthrie showed us how to build a Twitter API app in Visual Studio. Jon Harris showed us how to build a photo management and viewer application in Expression Blend, using virtually no code. Demos of apps built from scratch to F5 without the benefit of a teacher could be challenging, but they went off fine, without a hitch and without a ton of opaque, generated code. Everything written, be it C# or XAML, was easily understood, and the results were impressive.

    That means lots of developers can do this, and I think it means a lot will. What I've seen thus far of iPhone and Android development looks very tedious by comparison. Development for those platforms involves a collection of tools that integrate only to a point. Dev work for WP7 involves Visual Studio, Silverlight and the same debugging experience .NET developers already know. This was very exciting for me.

    All the demos harkened back to the days of building apps with Visual Basic: design the front end, put in the code-behind, and then hit F5. And that makes sense, because the phone platform and the PC of the early 90s are both, essentially, client OS machines. The Web was minimal and the "device" was everything. The same is true of this phone; it's a client app contraption that fits in your pocket.

    And if the platforms are comparable, hopefully so too will be the draw of ease-of-development. WP7 has the potential to make mobile developers want to switch over, and to convince enterprise developers to get into the phone scene. Will this propel the new phone platform to new heights, and restore Microsoft's competitiveness in the mobile arena? I hope so. I think so. And if Microsoft uses developers to build themselves a victory, that would be beneficial and would show that Microsoft has learned from its failures, as well as its successes.

    Today I saw a few beautiful apps. Tomorrow I hope I see a slew of others; maybe not as polished, but plentiful, attractive and stable. That would be a victory for Microsoft, and for developers. And it would show everyone else that developers are the kingmakers. They need cheap, efficient dev tools and lots of respect. Microsoft has always been the company to provide that. Hopefully, with WP7, they will return to that persona and see how very timeless it is.

    Read the article
