Search Results

Search found 1057 results on 43 pages for 'cyber monday'.


  • What's Happening in Business Analytics at OpenWorld 2012?

    - by jmorourke
    Oracle OpenWorld 2012 is rapidly approaching on September 30th, when we take over the city of San Francisco for five days. The Business Analytics program this year is our strongest ever, with over 150 EPM, BI, Analytics and Data Warehousing sessions delivered by Oracle, our customers and partners. We’ll also have Hands-On Labs, 20 demo pods dedicated to Business Analytics products, and over 30 partners exhibiting their solutions. So what’s hot in the Business Analytics program at OpenWorld? Here are some of the “can’t miss” sessions at this year’s conference: The EPM and BI general sessions, led by SVP of Product Development Balaji Yelamanchili, will highlight what’s new and provide a view into Oracle’s EPM, BI and Analytics strategies. Both sessions are scheduled on Monday, October 1st. Thursday Keynote: See More, Act Faster: Oracle Business Analytics, led by Oracle President Mark Hurd, will provide a view into Oracle’s strategy for Business Analytics, especially engineered systems designed to provide extreme performance for the most rigorous analytic tasks. Superfast Business Intelligence with Oracle Exalytics: hear about various business intelligence scenarios in which Oracle Exalytics provides exemplary value—from operational reporting and prepackaged applications to analytics on unstructured data. Turn Insights into Real-Time Actions with Oracle Business Intelligence Mobile: learn how Oracle Business Intelligence Mobile enables organizations to deliver relevant information and turn insight into real-time action, no matter where employees are located. Empowering the Business User: Introduction to Oracle Endeca Information Discovery: find out how you can find fast answers to the new questions that confront your business every day, while avoiding the confusion and inconsistencies brought about by spreadsheets and desktop tools. Big Data: The Big Story: learn how to harness big data, your existing data, and predictive analytics to make better decisions in an environment of rapid shifts in behavior and instant feedback. Learn about the technologies that constitute a big data architecture, how to leverage and implement advanced analytics for real-time decisions, and the tools needed to know the unknown. Planning at the Speed of Business with Oracle Exalytics: learn how Oracle Hyperion Planning leverages the power of Oracle Exalytics to do planning faster, with more detail and more users than ever. For more details on these and other Business Analytics sessions at OpenWorld, download the Focus On Business Analytics program guide at http://www.oracle.com/openworld/focus-on/index.html We look forward to seeing you in San Francisco!

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld--we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too -- I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.) Monday, October 1st: 3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D) John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris. Tuesday, October 2nd: 10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270) Get the birds-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle’s SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, business case made, implementation details, and lessons learned. 11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252) This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing -- but mostly we keep time open for the panel to take questions from attendees, because that's the fun part. Wednesday, October 3rd: 10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D) This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities. Thursday, October 4th: 12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252) Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

  • Cannot run "Automation Anywhere" exe files from console (session 0) on Windows Server 2003 64 bit

    - by Tyler
    I have a simple exe created from an Automation Anywhere task that displays a message box saying hello world. I created this simple exe just for debugging the following issue. When I log in to the console (session 0) and run the Automation Anywhere created executable, it starts to run the task: it shows up in the applications and processes lists in the task manager and it shows the two "loading..." windows briefly on the screen, just like normal. But after that, nothing happens... the "hello world" message does not show up. The exe is done and is removed from the application and process lists in the task manager. The user I am logged in as has admin rights, and the machine uses "autologin" to automatically log in with this profile when it starts up. If I right click on the exe and "run as" another admin user, the exe runs properly, showing the "hello world" message. Also, if I log into the server in a new session with the original user (the one that has the problems in session 0) and then run the exe, it runs properly and shows the "hello world". It works fine in any session other than the console session. There is something about the console session that is causing the exe not to run properly... even though it does appear to start running the exe. I should also mention that everything was working fine until Monday at midnight, after which none of the executables could be run successfully. Nothing was changed on the server and no updates were installed. I have since installed Windows updates, but that didn't change anything. Looking for some advice on how to get these executables working in the console session again. Thanks!
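
    One quick diagnostic sketch, offered as a suggestion only: confirm which session you are actually attached to, and which session the process lands in while it is briefly alive (Server 2003 treats the console, session 0, differently from RDP sessions). The exe name below is a placeholder.

        REM Show the sessions on the box and which one you are attached to
        qwinsta

        REM While the task is still running, list its PID and session number
        REM (replace helloworld.exe with the actual executable name)
        tasklist /FI "IMAGENAME eq helloworld.exe"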

    Read the article

  • Snort's problems in generating alerts from the DARPA 1998 intrusion detection dataset

    - by manofseven2
    Hi. I’m working on the DARPA 1998 intrusion detection dataset. When I run Snort on this dataset (the outside.tcpdump file), Snort doesn't generate the complete list of alerts: it starts from the last few hours of the tcpdump file and generates alerts about that section of the file only, while all the packets from the first hours are ignored. Another problem is the timestamp of the generated alerts: when I run Snort on a specific day of the dataset, Snort inserts an incorrect timestamp into the alerts. The configuration, command line and other information about my research are:

    Snort version: 2.8.6
    Operating system: Windows XP
    Rule version: snortrules-snapshot-2860_s.tar.gz

    Command line:

        snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump -l D:\users\amir\current-task\research\thesis\snort\890230

    Snort.config:

        # Setup the network addresses you are protecting
        var HOME_NET any
        # Set up the external network addresses. Leave as "any" in most situations
        var EXTERNAL_NET any
        # List of DNS servers on your network
        var DNS_SERVERS $HOME_NET
        # List of SMTP servers on your network
        var SMTP_SERVERS $HOME_NET
        # List of web servers on your network
        var HTTP_SERVERS $HOME_NET
        # List of sql servers on your network
        var SQL_SERVERS $HOME_NET
        # List of telnet servers on your network
        var TELNET_SERVERS $HOME_NET
        # List of ssh servers on your network
        var SSH_SERVERS $HOME_NET
        # List of ports you run web servers on
        portvar HTTP_PORTS [80,1220,2301,3128,7777,7779,8000,8008,8028,8080,8180,8888,9999]
        # List of ports you want to look for SHELLCODE on.
        portvar SHELLCODE_PORTS !80
        # List of ports you might see oracle attacks on
        portvar ORACLE_PORTS 1024:
        # List of ports you want to look for SSH connections on:
        portvar SSH_PORTS 22
        # other variables, these should not be modified
        var AIM_SERVERS [64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24]
        var RULE_PATH ../rules
        var SO_RULE_PATH ../so_rules
        var PREPROC_RULE_PATH ../preproc_rules
        # Stop generic decode events:
        config disable_decode_alerts
        # Stop Alerts on experimental TCP options
        config disable_tcpopt_experimental_alerts
        # Stop Alerts on obsolete TCP options
        config disable_tcpopt_obsolete_alerts
        # Stop Alerts on T/TCP alerts
        config disable_tcpopt_ttcp_alerts
        # Stop Alerts on all other TCPOption type events:
        config disable_tcpopt_alerts
        # Stop Alerts on invalid ip options
        config disable_ipopt_alerts
        # Alert if value in length field (IP, TCP, UDP) is greater than the length of the packet
        # config enable_decode_oversized_alerts
        # Same as above, but drop packet if in Inline mode (requires enable_decode_oversized_alerts)
        # config enable_decode_oversized_drops
        # Configure IP / TCP checksum mode
        config checksum_mode: all
        config pcre_match_limit: 1500
        config pcre_match_limit_recursion: 1500
        # Configure the detection engine. See the Snort Manual, Configuring Snort - Includes - Config
        config detection: search-method ac-split search-optimize max-pattern-len 20
        # Configure the event queue. For more information, see README.event_queue
        config event_queue: max_queue 8 log 3 order_events content_length
        dynamicpreprocessor directory D:\programs\Snort_2.8.6\snort\lib\snort_dynamicpreprocessor
        dynamicengine D:\programs\Snort_2.8.6\snort\lib\snort_dynamicengine\sf_engine.dll
        # path to dynamic rules libraries
        #dynamicdetection directory /usr/local/lib/snort_dynamicrules
        preprocessor frag3_global: max_frags 65536
        preprocessor frag3_engine: policy windows detect_anomalies overlap_limit 10 min_fragment_length 100 timeout 180
        preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no
        preprocessor stream5_tcp: policy windows, detect_anomalies, require_3whs 180, \
            overlap_limit 10, small_segments 3 bytes 150, timeout 180, \
            ports client 21 22 23 25 42 53 79 109 110 111 113 119 135 136 137 139 143 \
            161 445 513 514 587 593 691 1433 1521 2100 3306 6665 6666 6667 6668 6669 \
            7000 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779, \
            ports both 80 443 465 563 636 989 992 993 994 995 1220 2301 3128 6907 7702 7777 7779 7801 7900 7901 7902 7903 7904 7905 \
            7906 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 8000 8008 8028 8080 8180 8888 9999
        preprocessor stream5_udp: timeout 180
        preprocessor http_inspect: global iis_unicode_map unicode.map 1252 compress_depth 20480 decompress_depth 20480
        preprocessor http_inspect_server: server default \
            chunk_length 500000 \
            server_flow_depth 0 \
            client_flow_depth 0 \
            post_depth 65495 \
            oversize_dir_length 500 \
            max_header_length 750 \
            max_headers 100 \
            ports { 80 1220 2301 3128 7777 7779 8000 8008 8028 8080 8180 8888 9999 } \
            non_rfc_char { 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07 } \
            enable_cookie \
            extended_response_inspection \
            inspect_gzip \
            apache_whitespace no \
            ascii no \
            bare_byte no \
            directory no \
            double_decode no \
            iis_backslash no \
            iis_delimiter no \
            iis_unicode no \
            multi_slash no \
            non_strict \
            u_encode yes \
            webroot no
        preprocessor rpc_decode: 111 32770 32771 32772 32773 32774 32775 32776 32777 32778 32779 no_alert_multiple_requests no_alert_large_fragments no_alert_incomplete
        preprocessor bo
        preprocessor ftp_telnet: global inspection_type stateful encrypted_traffic no
        preprocessor ftp_telnet_protocol: telnet \
            ayt_attack_thresh 20 \
            normalize ports { 23 } \
            detect_anomalies
        preprocessor ftp_telnet_protocol: ftp server default \
            def_max_param_len 100 \
            ports { 21 2100 3535 } \
            telnet_cmds yes \
            ignore_telnet_erase_cmds yes \
            ftp_cmds { ABOR ACCT ADAT ALLO APPE AUTH CCC CDUP } \
            ftp_cmds { CEL CLNT CMD CONF CWD DELE ENC EPRT } \
            ftp_cmds { EPSV ESTA ESTP FEAT HELP LANG LIST LPRT } \
            ftp_cmds { LPSV MACB MAIL MDTM MIC MKD MLSD MLST } \
            ftp_cmds { MODE NLST NOOP OPTS PASS PASV PBSZ PORT } \
            ftp_cmds { PROT PWD QUIT REIN REST RETR RMD RNFR } \
            ftp_cmds { RNTO SDUP SITE SIZE SMNT STAT STOR STOU } \
            ftp_cmds { STRU SYST TEST TYPE USER XCUP XCRC XCWD } \
            ftp_cmds { XMAS XMD5 XMKD XPWD XRCP XRMD XRSQ XSEM } \
            ftp_cmds { XSEN XSHA1 XSHA256 } \
            alt_max_param_len 0 { ABOR CCC CDUP ESTA FEAT LPSV NOOP PASV PWD QUIT REIN STOU SYST XCUP XPWD } \
            alt_max_param_len 200 { ALLO APPE CMD HELP NLST RETR RNFR STOR STOU XMKD } \
            alt_max_param_len 256 { CWD RNTO } \
            alt_max_param_len 400 { PORT } \
            alt_max_param_len 512 { SIZE } \
            chk_str_fmt { ACCT ADAT ALLO APPE AUTH CEL CLNT CMD } \
            chk_str_fmt { CONF CWD DELE ENC EPRT EPSV ESTP HELP } \
            chk_str_fmt { LANG LIST LPRT MACB MAIL MDTM MIC MKD } \
            chk_str_fmt { MLSD MLST MODE NLST OPTS PASS PBSZ PORT } \
            chk_str_fmt { PROT REST RETR RMD RNFR RNTO SDUP SITE } \
            chk_str_fmt { SIZE SMNT STAT STOR STRU TEST TYPE USER } \
            chk_str_fmt { XCRC XCWD XMAS XMD5 XMKD XRCP XRMD XRSQ } \
            chk_str_fmt { XSEM XSEN XSHA1 XSHA256 } \
            cmd_validity ALLO \
            cmd_validity EPSV \
            cmd_validity MACB \
            cmd_validity MDTM \
            cmd_validity MODE \
            cmd_validity PORT \
            cmd_validity PROT \
            cmd_validity STRU \
            cmd_validity TYPE
        preprocessor ftp_telnet_protocol: ftp client default \
            max_resp_len 256 \
            bounce yes \
            ignore_telnet_erase_cmds yes \
            telnet_cmds yes
        preprocessor smtp: ports { 25 465 587 691 } \
            inspection_type stateful \
            normalize cmds \
            normalize_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
            max_command_line_len 512 \
            max_header_line_len 1000 \
            max_response_line_len 512 \
            alt_max_command_line_len 260 { MAIL } \
            alt_max_command_line_len 300 { RCPT } \
            alt_max_command_line_len 500 { HELP HELO ETRN EHLO } \
            alt_max_command_line_len 255 { EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET } \
            alt_max_command_line_len 246 { SEND SAML SOML AUTH TURN ETRN DATA RSET QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
            valid_cmds { MAIL RCPT HELP HELO ETRN EHLO EXPN VRFY ATRN SIZE BDAT DEBUG EMAL ESAM ESND ESOM EVFY IDENT NOOP RSET SEND SAML SOML AUTH TURN DATA QUIT ONEX QUEU STARTTLS TICK TIME TURNME VERB X-EXPS X-LINK2STATE XADR XAUTH XCIR XEXCH50 XGEN XLICENSE XQUE XSTA XTRN XUSR } \
            xlink2state { enabled }
        preprocessor ssh: server_ports { 22 } \
            autodetect \
            max_client_bytes 19600 \
            max_encrypted_packets 20 \
            max_server_version_len 100 \
            enable_respoverflow enable_ssh1crc32 \
            enable_srvoverflow enable_protomismatch
        preprocessor dcerpc2: memcap 102400, events [co ]
        preprocessor dcerpc2_server: default, policy WinXP, \
            detect [smb [139,445], tcp 135, udp 135, rpc-over-http-server 593], \
            autodetect [tcp 1025:, udp 1025:, rpc-over-http-server 1025:], \
            smb_max_chain 3
        preprocessor dns: ports { 53 } enable_rdata_overflow
        preprocessor ssl: ports { 443 465 563 636 989 992 993 994 995 7801 7702 7900 7901 7902 7903 7904 7905 7906 6907 7908 7909 7910 7911 7912 7913 7914 7915 7916 7917 7918 7919 7920 }, trustservers, noinspect_encrypted
        # SDF sensitive data preprocessor. For more information see README.sensitive_data
        preprocessor sensitive_data: alert_threshold 25
        output alert_full: alert.log
        output database: log, mysql, user=root password=123456 dbname=snort host=localhost
        include classification.config
        include reference.config
        include $RULE_PATH/local.rules
        include $RULE_PATH/attack-responses.rules
        include $RULE_PATH/backdoor.rules
        include $RULE_PATH/bad-traffic.rules
        include $RULE_PATH/chat.rules
        include $RULE_PATH/content-replace.rules
        include $RULE_PATH/ddos.rules
        include $RULE_PATH/dns.rules
        include $RULE_PATH/dos.rules
        include $RULE_PATH/exploit.rules
        include $RULE_PATH/finger.rules
        include $RULE_PATH/ftp.rules
        include $RULE_PATH/icmp.rules
        include $RULE_PATH/icmp-info.rules
        include $RULE_PATH/imap.rules
        include $RULE_PATH/info.rules
        include $RULE_PATH/misc.rules
        include $RULE_PATH/multimedia.rules
        include $RULE_PATH/mysql.rules
        include $RULE_PATH/netbios.rules
        include $RULE_PATH/nntp.rules
        include $RULE_PATH/oracle.rules
        include $RULE_PATH/other-ids.rules
        include $RULE_PATH/p2p.rules
        include $RULE_PATH/policy.rules
        include $RULE_PATH/pop2.rules
        include $RULE_PATH/pop3.rules
        include $RULE_PATH/rpc.rules
        include $RULE_PATH/rservices.rules
        include $RULE_PATH/scada.rules
        include $RULE_PATH/scan.rules
        include $RULE_PATH/shellcode.rules
        include $RULE_PATH/smtp.rules
        include $RULE_PATH/snmp.rules
        include $RULE_PATH/specific-threats.rules
        include $RULE_PATH/spyware-put.rules
        include $RULE_PATH/sql.rules
        include $RULE_PATH/telnet.rules
        include $RULE_PATH/tftp.rules
        include $RULE_PATH/virus.rules
        include $RULE_PATH/voip.rules
        include $RULE_PATH/web-activex.rules
        include $RULE_PATH/web-attacks.rules
        include $RULE_PATH/web-cgi.rules
        include $RULE_PATH/web-client.rules
        include $RULE_PATH/web-coldfusion.rules
        include $RULE_PATH/web-frontpage.rules
        include $RULE_PATH/web-iis.rules
        include $RULE_PATH/web-misc.rules
        include $RULE_PATH/web-php.rules
        include $RULE_PATH/x11.rules
        include threshold.conf

    Can anyone help me to solve this problem? Thanks.
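
    One thing worth ruling out, offered purely as a hedged sketch: with config checksum_mode: all, packets whose checksums Snort considers invalid are not handed to the preprocessors or the detection engine, which can silently drop large parts of a replayed capture. Re-running with checksum verification disabled (-k none) and UTC timestamps (-U) helps separate a checksum problem from a time-zone problem:

        REM Same run, but skip checksum verification and print timestamps in UTC
        snort_2.8.6 -c D:\programs\Snort_2.8.6\snort\etc\snort.conf ^
            -r d:\users\amir\docs\darpa\training_data\week_3\monday\outside.tcpdump ^
            -l D:\users\amir\current-task\research\thesis\snort\890230 -k none -U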

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the 1st transaction log backup in the same folder. On Tuesday, before I take the 2nd transaction log backup, I move the first transaction log backup from folder1 and put it into folder2, and then I take the 2nd transaction log backup and put it in folder1. Same thing on Wednesday, Thursday and so on. Basically, in folder1 I always have the latest full backup and the latest transaction log backup, while the other transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the 4th (Thursday) transaction log backup, does it look for the previous transaction log backups (1st, 2nd, and 3rd) so that this new backup will only include the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? Basically, I am asking this because all my transaction log backups seem to be about the same size, and I thought that their size would depend on the amount of transactions since the last transaction log backup. Example: say you have a full backup, and then you take a transaction log backup of, let's say, 200 MB. If you immediately take another transaction log backup, this last one should be considerably smaller than the first, because no or almost no transactions occurred between the two backups, right? At least, that's what I've been assuming. What happens in my case is that this second backup is pretty much the same size as the first one, and I am wondering if the reason is that I moved the first transaction log backup to a different folder, so SQL Server now thinks that all I have is a full backup, and it then puts all the transactions that happened since the full backup into the 2nd transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
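
    For what it's worth, SQL Server never looks at the backup files on disk when it takes a log backup: the log chain is tracked inside the database by log sequence numbers (and recorded in msdb's history tables), so moving earlier backup files to another folder has no effect on what the next log backup contains. A quick way to check whether each log backup really covers only the interval since the previous one is to compare the LSNs and sizes in msdb; a minimal sketch (the database name is a placeholder):

        -- Recent backups for one database: type D = full, L = log.
        -- Each log backup's first_lsn should equal the previous log backup's last_lsn.
        SELECT backup_start_date,
               type,                              -- D = full, L = log
               backup_size / 1048576.0 AS size_mb,
               first_lsn,
               last_lsn
        FROM   msdb.dbo.backupset
        WHERE  database_name = 'MyDatabase'
        ORDER  BY backup_start_date;

    If consecutive log backups really are the same size despite little user activity, something else (index maintenance jobs, for example) may be generating log in between, and this history would show it.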

    Read the article

  • How to implement Survey page using ASP.NET MVC?

    - by Aleks
    I need to implement a survey page using ASP.NET MVC (v4). That functionality has already been implemented in our project using ASP.NET WebForms. (I really searched a lot for real examples of similar functionality implemented via MVC, but failed.) Goal: staying on the same page (in WebForms, 'Survey.aspx'), each time the user clicks 'Next Page', load the next bunch of questions (controls) for the user to answer. The types of controls in the questions are defined only at run time (retrieved from the database). To explain the question better, I manually created the (rather simple) markup below of 'two' pages (two loads of controls):

    PAGE 1

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Survey.aspx.cs" Inherits="WebSite.Survey" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
                <div><h2>Internal Survey</h2></div>
                <div><h3>Page 1</h3></div>
                <div style="padding-bottom: 10px">
                    <div><b>Did you have internet disconnections during last week?</b></div>
                    <asp:RadioButtonList ID="RadioButtonList1" runat="server">
                        <asp:ListItem>Yes</asp:ListItem>
                        <asp:ListItem>No</asp:ListItem>
                    </asp:RadioButtonList>
                </div>
                <div style="padding-bottom: 10px">
                    <div><b>Which days of the week suit you best for meeting up ?</b></div>
                    <asp:CheckBoxList ID="CheckBoxList1" runat="server">
                        <asp:ListItem>Monday</asp:ListItem>
                        <asp:ListItem>Tuesday</asp:ListItem>
                        <asp:ListItem>Wednesday</asp:ListItem>
                        <asp:ListItem>Thursday</asp:ListItem>
                        <asp:ListItem>Friday</asp:ListItem>
                    </asp:CheckBoxList>
                </div>
                <div style="padding-bottom: 10px">
                    <div><b>How satisfied are you with your job? </b></div>
                    <asp:RadioButtonList ID="RadioButtonList2" runat="server">
                        <asp:ListItem>Very Good</asp:ListItem>
                        <asp:ListItem>Good</asp:ListItem>
                        <asp:ListItem>Bad</asp:ListItem>
                        <asp:ListItem>Very Bad</asp:ListItem>
                    </asp:RadioButtonList>
                </div>
                <div style="padding-bottom: 10px">
                    <div><b>How satisfied are you with your direct supervisor ? </b></div>
                    <asp:RadioButtonList ID="RadioButtonList3" runat="server">
                        <asp:ListItem>Not Satisfied</asp:ListItem>
                        <asp:ListItem>Somewhat Satisfied</asp:ListItem>
                        <asp:ListItem>Neutral</asp:ListItem>
                        <asp:ListItem>Satisfied</asp:ListItem>
                        <asp:ListItem>Very Satisfied</asp:ListItem>
                    </asp:RadioButtonList>
                </div>
                <div style="padding-bottom: 10px">
                    <asp:Button ID="Button1" runat="server" Text="Next Page" onclick="Button1_Click" />
                </div>
            </form>
        </body>
        </html>

    PAGE 2

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Survey.aspx.cs" Inherits="WebSite.Survey" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
                <div><h2>Internal Survey</h2></div>
                <div><h3>Page 2</h3></div>
                <div style="padding-bottom: 10px">
                    <div><b>Did administrators fix your internet connection in time ?</b></div>
                    <asp:RadioButtonList ID="RadioButtonList1" runat="server">
                        <asp:ListItem>Yes</asp:ListItem>
                        <asp:ListItem>No</asp:ListItem>
                    </asp:RadioButtonList>
                </div>
                <div style="padding-bottom: 10px">
                    <div><b>What's your overall impression about the job ?</b></div>
                    <asp:TextBox ID="TextBox1" runat="server" Height="88px" Width="322px"></asp:TextBox>
                </div>
                <div style="padding-bottom: 10px">
                    <div><b>Select day which best suits you for admin support ?</b></div>
                    <asp:DropDownList ID="DropDownList1" runat="server">
                        <asp:ListItem>Select day</asp:ListItem>
                        <asp:ListItem>Monday</asp:ListItem>
                        <asp:ListItem>Wednesday</asp:ListItem>
                        <asp:ListItem>Friday</asp:ListItem>
                    </asp:DropDownList>
                </div>
                <div style="padding-bottom: 10px">
                    <asp:Button ID="Button1" runat="server" Text="Next Page" onclick="Button1_Click" />
                </div>
            </form>
        </body>
        </html>
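
    A minimal MVC sketch of one way to structure this, with hypothetical names throughout (SurveyController, QuestionViewModel, IQuestionRepository are all assumptions, not part of the original post): carry the page number in the model, load each page's question batch by its runtime type, and post back to the same action for the next page.

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Hypothetical view models: one entry per question, typed at run time
        // from the database row so the view can render the matching control.
        public enum QuestionType { RadioList, CheckBoxList, DropDown, FreeText }

        public class QuestionViewModel
        {
            public int QuestionId { get; set; }
            public string Text { get; set; }
            public QuestionType Type { get; set; }
            public List<string> Options { get; set; }          // empty for FreeText
            public List<string> SelectedAnswers { get; set; }  // bound on post-back
        }

        public class SurveyPageViewModel
        {
            public int PageNumber { get; set; }
            public List<QuestionViewModel> Questions { get; set; }
        }

        // Assumed abstraction over the survey tables.
        public interface IQuestionRepository
        {
            List<QuestionViewModel> GetQuestionsForPage(int page);
            void SaveAnswers(SurveyPageViewModel page);
        }

        public class SurveyController : Controller
        {
            private readonly IQuestionRepository repository;

            // Wire up via your DI container, or new up a concrete class here.
            public SurveyController(IQuestionRepository repository)
            {
                this.repository = repository;
            }

            [HttpGet]
            public ActionResult Page(int page = 1)
            {
                // Same view every time; only the question batch changes.
                var model = new SurveyPageViewModel
                {
                    PageNumber = page,
                    Questions = repository.GetQuestionsForPage(page)
                };
                return View("Survey", model);
            }

            [HttpPost]
            public ActionResult Page(SurveyPageViewModel posted)
            {
                repository.SaveAnswers(posted);
                // Stay on /Survey/Page, just advance to the next batch.
                return RedirectToAction("Page", new { page = posted.PageNumber + 1 });
            }
        }

    The Survey view would then loop over Model.Questions and switch on Type, rendering one partial (or editor template) per control type; because the page number travels with the post, no server-side control state is needed.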

    Read the article

  • Search for files after a relative date using Windows search

    - by Zoredache
    I am looking for a way to save a search that includes a relative date. Specifically, I am looking for a way to save a search that matches files with a modification date within the last 7 days. I have read the Windows Search Advanced Query Syntax document and I am not seeing a way to say "7 days ago". The numbers and ranges section does mention that relative dates are possible, but the relative dates described there do not fit the criteria I need. The lastweek value almost looks like what I want, except that if I run a query like after:lastweek on a Monday it will only show files that have been modified since Sunday at 12:00. The lastweek/lastmonth values seem to be relative to the start of the week/month, which is not what I need. "Multi-word relative dates: week, next month, last week, past month, or coming year. The values can also be entered contracted, as follows: thisweek, nextmonth, lastweek, pastmonth, comingyear." One nice thing about saved searches is that they are stored as an XML document and the file format is documented, but I am not seeing how to form a correct value for a DateTime. If I understood this format, I suspect I could use a text editor and create a saved search that does what I want. Fragment from the examples:

        <conditions>
            <condition type="leafCondition" valuetype="System.StructuredQueryType.DateTime"
                       property="System.DateModified" operator="imp"
                       value="R00UUUUUUUUZZXD-30NU" propertyType="wstr" />
        </conditions>

    To summarize, I am looking for an answer to one or both of these questions:
    1. How do I make a query for '7 days ago' using the standard syntax?
    2. How is the DateTime stored in a saved search?
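
    Purely as a guess from the quoted fragment rather than documented syntax: the tail "XD-30NU" looks like it could encode "minus 30 day units", so editing a saved .search-ms file and changing the -30 to -7 might produce a rolling 7-days-ago condition:

        <conditions>
            <condition type="leafCondition" valuetype="System.StructuredQueryType.DateTime"
                       property="System.DateModified" operator="imp"
                       value="R00UUUUUUUUZZXD-7NU" propertyType="wstr" />
        </conditions>

    Saving a search built in Explorer with a known relative date and diffing the resulting XML against this fragment would confirm or refute that reading.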

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello. I have the following output from a simple debug JSP:

        Weblogic Startup Since: Friday, October 19, 2012, 08:36:12 AM
        Database Current Time: Wednesday, December 12, 2012, 11:43:44 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual. Line 3 is the output of the Java code new Date(). I have checked with the shell date command that line 2's output conforms to the OS time. The output of line 3 is the mystery; I don't know how the Java VM arrives at it. On another machine with the same setup, the same JSP outputs:

        Weblogic Startup Since: Tuesday, December 11, 2012, 02:29:06 PM
        Database Current Time: Wednesday, December 12, 2012, 11:51:48 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    Another machine:

        Weblogic Startup Since: Monday, December 10, 2012, 05:00:34 PM
        Database Current Time: Wednesday, December 12, 2012, 11:52:03 AM
        Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP synchronization runs daily. Anyway, here are the server versions:

        HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
        java version "1.6.0.04"
        Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
        Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
        WebLogic Server Version: 10.3.2.0

    Java properties:

        java.runtime.name=Java(TM) SE Runtime Environment
        java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
        java.vendor=Hewlett-Packard Co.
        java.vendor.url=http\://www.hp.com/go/Java
        java.version=1.6.0.04
        java.vm.name=Java HotSpot(TM) 64-Bit Server VM
        java.vm.info=mixed mode
        java.vm.specification.vendor=Sun Microsystems Inc.
        java.vm.vendor="Hewlett-Packard Company"
        sun.arch.data.model=64
        sun.cpu.endian=big
        sun.cpu.isalist=ia64r0
        sun.io.unicode.encoding=UnicodeBig
        sun.java.launcher=SUN_STANDARD
        sun.jnu.encoding=8859_1
        sun.management.compiler=HotSpot 64-Bit Server Compiler
        sun.os.patch.level=unknown
        os.name=HP-UX
        os.version=B.11.31
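
    A small sketch to quantify the drift against uptime, with one assumption flagged in the comments (stock HP-UX date has no +%s, so it shells out to Perl; adjust the path if Perl lives elsewhere on your box):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class ClockDrift {
            public static void main(String[] args) throws Exception {
                while (true) {
                    // OS view of "now" in epoch seconds, via Perl (assumed at /usr/bin/perl)
                    Process p = Runtime.getRuntime().exec(
                            new String[] { "/usr/bin/perl", "-e", "print time" });
                    BufferedReader r = new BufferedReader(
                            new InputStreamReader(p.getInputStream()));
                    long osSeconds = Long.parseLong(r.readLine().trim());
                    p.waitFor();

                    // JVM view of "now"
                    long jvmSeconds = System.currentTimeMillis() / 1000L;

                    System.out.println("jvm - os = " + (jvmSeconds - osSeconds) + " s");
                    Thread.sleep(60L * 60L * 1000L); // sample hourly
                }
            }
        }

    Running this inside the same WebLogic JVM (or as a standalone process started at the same time) would show whether the drift grows linearly with uptime, which would point at the JVM's internal timer rather than NTP.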

    Read the article

  • Windows 7 - User profile corrupted on standby/hibernate

    - by Dogbert
    I have a friend who uses Windows 7 for her home PC. She has a RAID1 array using up-to-date Intel Matrix Storage drivers, and the entire array is backed up to a separate internal SATA HDD via Acronis True Image every night. On weekends, she lets her machine go into suspend after 4 hours of inactivity, and then into hibernation after 6 hours of inactivity. Her Acronis backup system does nightly incremental backups, and full backups every Saturday night. She also has AVG Free Antivirus installed, which does full scans every Monday. So far, on two occasions, her user profile has turned up corrupt on a Sunday morning. I couldn't find any solution that allowed me to repair her profile, so I end up having to recreate it (as suggested by the MS Knowledge Base, http://windows.microsoft.com/en-us/windows7/fix-a-corrupted-user-profile - yep, no way to fix it, just clobber the whole thing), then copy over her data, recreate her Outlook profile, reconfigure all third-party applications, etc. It's a real nightmare, and takes 4 hours each time. Are there any suggestions on how to resolve this profile corruption? This was happening even before the RAID/Acronis solution was in place, but I thought I'd provide as much information as possible.
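
    For the rebuild step, a sketch of the data copy using robocopy, assuming the usual approach of excluding the per-user registry hive files so the new profile's hive stays intact (both profile paths are placeholders):

        REM Copy the old profile's data into the freshly created one,
        REM skipping junction points and the registry hive files.
        robocopy "C:\Users\OldProfile" "C:\Users\NewProfile" /E /XJ /R:1 /W:1 ^
            /XF ntuser.dat ntuser.dat.log ntuser.ini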

    Read the article

  • My New Dell Workstation T-7500 gives electric shock

    - by Dan
    Just bought a Dell T-7500 workstation. When the installation technician came to install the workstation, he got an electric shock when he touched the start button. He also got a shock when he touched the front panel, but no shock when touching the rest of the chassis. He called Dell Support and tried to troubleshoot by taking out various wires etc., but it did not help. I touched the same places and I also got a shock. We checked everything possible, including connecting to various outlets, but it didn't solve the problem. The installation subcontractor said that they are not supposed to troubleshoot anything on a new system, just install it, so they made notes and left. I called my sales guy, and since it was the weekend he said he would take care of it on Monday. Here are my concerns: (1) Why does that happen? It is a mild shock and it won't kill you, but might it damage other parts of the workstation? (2) Is this common for workstations? (3) What should I do? Of course Dell will try to troubleshoot again, but should I let them do that or ask for a new system? (4) What would happen if I continue using it until the new system arrives? Thank you.

    Read the article

  • Backbone.js Adding Model to Collection Issue

    - by jtmgdevelopment
    I am building a test application in Backbone.js (my first app using Backbone). The app goes like this:

    1. Load data from the server ("Plans")
    2. Build a list of plans and show it on screen
    3. There is a button to add a new plan
    4. Once the new plan is added, add it to the collection (do not save it to the server as of now)
    5. Redirect to the index page and show the new collection (including the plan you just added)

    My issue is with item 5. When I save a plan, I add the model to the collection and then redirect to the initial view. At this point, I fetch data from the server. When I fetch data from the server, this overwrites my collection and my added model is gone. How can I prevent this from happening? I have found a way to do this but it is definitely not the correct way at all. Below you will find my code examples for this. Thanks for the help.

    PlansListView View:

        var PlansListView = Backbone.View.extend({
            tagName : 'ul',
            initialize : function() {
                _.bindAll( this, 'render', 'close' );
                //reset the view if the collection is reset
                this.collection.bind( 'reset', this.render, this );
            },
            render : function() {
                _.each( this.collection.models, function( plan ){
                    $( this.el ).append( new PlansListItemView({ model: plan }).render().el );
                }, this );
                return this;
            },
            close : function() {
                $( this.el ).unbind();
                $( this.el ).remove();
            }
        });//end

    NewPlanView Save Method:

        var NewPlanView = Backbone.View.extend({
            tagName : 'section',
            template : _.template( $( '#plan-form-template' ).html() ),
            events : {
                'click button.save' : 'savePlan',
                'click button.cancel' : 'cancel'
            },
            intialize: function() {
                _.bindAll( this, 'render', 'save', 'cancel' );
            },
            render : function() {
                $( '#container' ).append( $( this.el ).html(this.template( this.model.toJSON() )) );
                return this;
            },
            savePlan : function( event ) {
                this.model.set({
                    name : 'bad plan',
                    date : 'friday',
                    desc : 'blah',
                    id : Math.floor(Math.random()*11),
                    total_stops : '2'
                });
                this.collection.add( this.model );
                app.navigate('', true );
                event.preventDefault();
            },
            cancel : function(){}
        });

    Router (default method):

        index : function() {
            this.container.empty();
            var self = this;
            //This is a hack to get this to work
            //on default page load fetch all plans from the server
            //if the page has loaded (this.plans is defined) set the updated plans collection to the view
            //There has to be a better way!!
            if( ! this.plans ) {
                this.plans = new Plans();
                this.plans.fetch({
                    success: function() {
                        self.plansListView = new PlansListView({ collection : self.plans });
                        $( '#container' ).append( self.plansListView.render().el );
                        if( self.requestedID ) self.planDetails( self.requestedID );
                    }
                });
            } else {
                this.plansListView = new PlansListView({ collection : this.plans });
                $( '#container' ).append( self.plansListView.render().el );
                if( this.requestedID ) self.planDetails( this.requestedID );
            }
        },

    New Plan Route:

        newPlan : function() {
            var plan = new Plan({name: 'Cool Plan', date: 'Monday', desc: 'This is a great app'});
            this.newPlan = new NewPlanView({ model : plan, collection: this.plans });
            this.newPlan.render();
        }

    FULL CODE:

        ( function( $ ){
            var Plan = Backbone.Model.extend({
                defaults: {
                    name : '',
                    date : '',
                    desc : ''
                }
            });
            var Plans = Backbone.Collection.extend({
                model : Plan,
                url : '/data/'
            });
            $( document ).ready(function( e ){
                var PlansListView = Backbone.View.extend({
                    tagName : 'ul',
                    initialize : function() {
                        _.bindAll( this, 'render', 'close' );
                        //reset the view if the collection is reset
                        this.collection.bind( 'reset', this.render, this );
                    },
                    render : function() {
                        _.each( this.collection.models, function( plan ){
                            $( this.el ).append( new PlansListItemView({ model: plan }).render().el );
                        }, this );
                        return this;
                    },
                    close : function() {
                        $( this.el ).unbind();
                        $( this.el ).remove();
                    }
                });//end
                var PlansListItemView = Backbone.View.extend({
                    tagName : 'li',
                    template : _.template( $( '#list-item-template' ).html() ),
                    events : {
                        'click a' : 'listInfo'
                    },
                    render : function() {
                        $( this.el ).html( this.template( this.model.toJSON() ) );
                        return this;
                    },
                    listInfo : function( event ) {
                    }
                });//end
                var PlanView = Backbone.View.extend({
                    tagName : 'section',
                    events : {
                        'click button.add-plan' : 'newPlan'
                    },
                    template: _.template( $( '#plan-template' ).html() ),
                    initialize: function() {
                        _.bindAll( this, 'render', 'close', 'newPlan' );
                    },
                    render : function() {
                        $( '#container' ).append( $( this.el ).html( this.template( this.model.toJSON() ) ) );
                        return this;
                    },
                    newPlan : function( event ) {
                        app.navigate( 'newplan', true );
                    },
                    close : function() {
                        $( this.el ).unbind();
                        $( this.el ).remove();
                    }
                });//end
                var NewPlanView = Backbone.View.extend({
                    tagName : 'section',
                    template : _.template( $( '#plan-form-template' ).html() ),
                    events : {
                        'click button.save' : 'savePlan',
                        'click button.cancel' : 'cancel'
                    },
                    intialize: function() {
                        _.bindAll( this, 'render', 'save', 'cancel' );
                    },
                    render : function() {
                        $( '#container' ).append( $( this.el ).html(this.template( this.model.toJSON() )) );
                        return this;
                    },
                    savePlan : function( event ) {
                        this.model.set({
                            name : 'bad plan',
                            date : 'friday',
                            desc : 'blah',
                            id : Math.floor(Math.random()*11),
                            total_stops : '2'
                        });
                        this.collection.add( this.model );
                        app.navigate('', true );
                        event.preventDefault();
                    },
                    cancel : function(){}
                });
                var AppRouter = Backbone.Router.extend({
                    container : $( '#container' ),
                    routes : {
                        '' : 'index',
                        'viewplan/:id' : 'planDetails',
                        'newplan' : 'newPlan'
                    },
                    initialize: function(){
                    },
                    index : function() {
                        this.container.empty();
                        var self = this;
                        //This is a hack to get this to work
                        //on default page load fetch all plans from the server
                        //if the page has loaded (this.plans is defined) set the updated plans collection to the view
                        //There has to be a better way!!
                        if( ! this.plans ) {
                            this.plans = new Plans();
                            this.plans.fetch({
                                success: function() {
                                    self.plansListView = new PlansListView({ collection : self.plans });
                                    $( '#container' ).append( self.plansListView.render().el );
                                    if( self.requestedID ) self.planDetails( self.requestedID );
                                }
                            });
                        } else {
                            this.plansListView = new PlansListView({ collection : this.plans });
                            $( '#container' ).append( self.plansListView.render().el );
                            if( this.requestedID ) self.planDetails( this.requestedID );
                        }
                    },
                    planDetails : function( id ) {
                        if( this.plans ) {
                            this.plansListView.close();
                            this.plan = this.plans.get( id );
                            if( this.planView ) this.planView.close();
                            this.planView = new PlanView({ model : this.plan });
                            this.planView.render();
                        } else {
                            this.requestedID = id;
                            this.index();
                        }
                        if( ! this.plans ) this.index();
                    },
                    newPlan : function() {
                        var plan = new Plan({name: 'Cool Plan', date: 'Monday', desc: 'This is a great app'});
                        this.newPlan = new NewPlanView({ model : plan, collection: this.plans });
                        this.newPlan.render();
                    }
                });
                var app = new AppRouter();
                Backbone.history.start();
            });
        })( jQuery );
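
    On the actual question (item 5): the collection is wiped because fetch() resets it by default. A sketch of the usual fix, depending on the Backbone version in use:

        // Pre-1.0 Backbone: have fetch add the server's models instead of
        // resetting, so a locally added (unsaved) plan survives the fetch.
        this.plans.fetch({ add: true });

        // Backbone 1.0+: fetch merges by default; tell it not to remove
        // models the server doesn't know about.
        this.plans.fetch({ remove: false });

    Keeping a single Plans instance created in the router's initialize, fetching it once at startup, and letting the views re-render from the collection's "add"/"reset" events would also remove the if/else hack in the index route.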

    Read the article

  • How can I make an encrypted email message into a .p7m file?

    - by Blacklight Shining
    This is a bit complicated, so I'll explain what I'm really trying to do here: I have a Debian server, and I want to automatically email myself certain logs every week. I'm going to use cron and a bash script to copy the logs into a tarball shortly after midnight every Monday. A bash script on my home computer will then download the tarball from the server, along with a file to be used as the body of the email, and call an AppleScript to make a new email message. This is where I'm stuck—I can't find a way to encrypt and sign the email using AppleScript and Apple's mail client. I've noticed that if I put a delay in before sending the message, Mail will automatically set it to be encrypted and signed (as it normally does when I compose a message myself). However, there's no way to be sure of this when the script runs—if something goes wrong there, the script will just blindly send the email unencrypted. My solution there would be to somehow manually create a .p7m file with the tarball and message and attach it to the email the AppleScript creates. Then, when I receive it, Mail will treat it just like any other encrypted message with an attachment (right?) If there's a better way to do this, please let me know. ^^ (Ideally, everything would be done from the server, but there doesn't seem to be a way to send mail automatically without storing a password in plaintext.) (The server is running Debian squeeze; my home computer is a Mac running OS X Lion.)
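
    On the manual-.p7m idea, a sketch using OpenSSL's S/MIME tooling on the Debian side; the file names are placeholders, and since you are mailing yourself, the signing cert and the recipient cert can be the same one:

        # Sign the message (body text plus tarball, wrapped as MIME) with your key...
        openssl smime -sign -in body.txt -signer me-cert.pem -inkey me-key.pem -out signed.txt

        # ...then encrypt the signed message to the recipient's certificate.
        # The output is the S/MIME blob that mail clients show as smime.p7m.
        openssl smime -encrypt -aes256 -in signed.txt -out message.p7m -outform SMIME me-cert.pem

    Whether Mail treats a hand-attached .p7m exactly like a natively encrypted message is worth testing, but this at least guarantees the content is encrypted before any mail client touches it.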

    Read the article

  • virtualized windows 2003 domain with CentOS 5.3 and poor connectivity

    - by Chris Gow
    Hi: I have a test lab set up running a virtualized Windows 2003 domain on a CentOS 5.3 (Xen) host, and I am experiencing connectivity problems with guests running on other hosts that are part of the same domain. Here's the setup: On Computer A I have CentOS 5.3 running as the host, with virtualized Windows 2003 servers for a primary domain controller, a backup domain controller and an Exchange server. The primary domain controller also acts as a WINS and DNS server. The Windows domain is on a separate subnet from my company's corporate network. Connectivity to any of the virtualized guests on Computer A is fine (remote desktop, ping, what have you). I have another host computer (Computer B) that also has a virtualized Windows 2003 server guest that is part of the same domain. However, connectivity to that guest is flaky at best. I continuously get at least 60% packet loss when I try to ping the guest, and due to that flakiness I cannot access any of the services it runs (remote desktop, web). Now here's the interesting part. It seems to affect only machines in the domain that run on a different computer than the domain controller. On Computer B there is another Windows 2003 guest that is not part of the test domain and is on my corporate network; there are no connectivity issues with that guest machine. The problem does not seem to be specific to Computer B either: I created a test VM on my local computer within the test domain and it exhibits the same behaviour as the guest on Computer B. A couple of items to note:
    - Host OS on both Computer A and B is the same CentOS 5.3 64-bit
    - Guest OS is Windows 2003 64-bit and 32-bit (the guest on Computer B is 32-bit)
    - Guest OSes are all up to date (as of Monday)
    - Host OS on Computer A was upgraded from CentOS 5.2 to 5.3
    Update: Sorry I did not follow up with the comments from below. Computers A and B have been moved to their own dedicated switch and the problem has gone away. I'm not sure what the underlying problem(s) were, though.

    Read the article

  • Why is dwm.exe using so much memory?

    - by Leonard Challis
    I've scoured the web, but I'm sick of reading "scan your computer for viruses" and "upgrade your RAM" in answers to questions similar to this one. I understand that dwm.exe (simply put) caches bitmaps for things like Aero Peek and similar, but as far as I have read it shouldn't be using vast amounts of memory. My colleague and I both have 4GB of RAM and Core 2 Duos -- essentially our machines are pretty capable. His dwm.exe is running at around 30MB; mine is currently running at about half a gig, though it does fluctuate quite a lot. This is while running the exact same applications (currently Zend Studio, Firefox (with Firemin - low memory usage), and Outlook). Every so often I get a notification asking if I want to switch to Aero Basic because dwm is using too much memory, and sometimes it will just switch itself to Basic and let me know why. I know it's possible to stop it switching, but I want to know why it is using so much memory in the first place; otherwise that's just papering over the cracks. One thing to add: this seems to have started after a robbery on Monday, in which two of my monitors were stolen and I had to temporarily use a couple of alternative monitors. I am now using brand new monitors but the problem is the same. All drivers are installed and seemingly working fine. Any ideas why the usage is so high? We are using Windows 7 Professional 64-bit.

    Read the article

  • Help trying to figure out why IIS7 is crashing / locking up / denying connections

    - by Pure.Krome
    Hi folks, I've got a pretty busy website running on a single web front-end machine, on W2K8 + IIS7. Every now and then - e.g. maybe Monday at 3am, then a few days later at some early morning time, then nothing for 2 weeks, etc. - the website fails to respond to any client connections, i.e. no one can connect to the website. I can remote desktop to the machine, etc., no probs. I restart the app pool (the website is running in integrated mode), still nothing. I try to get a crash dump of the process (it's around 600 MB, maybe even more)... that fails after about a minute of trying (and I have plenty of HD space). The only way to fix this issue is to manually stop the WWW service and then start it again. The stopping takes a while (a minute?) while starting is nearly instant. I'm at a loss to figure out what part of my code is causing this. At first, I thought it might be a stack overflow because of some error that might be going to the error page, which in turn errors... rinse, repeat, boom. But I've had a look at the error page and it feels OK. So, I'm hoping someone might be able to help and say how I can correctly get a proper dump of the IIS process so I can then do some more autopsy on it. I would email Tess Ferrandez (the goddess of crash debugging) but I thought I'd try here before I spam her. Does anyone have any suggestions on how to start debugging this issue?
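
    On capturing the dump itself, a sketch using Sysinternals ProcDump for the hang scenario (site stops answering but w3wp.exe stays alive); the PID below is a placeholder:

        REM Map app pools to worker-process PIDs (IIS7)
        %windir%\system32\inetsrv\appcmd list wp

        REM Full memory dump of the hung worker process; -ma captures the
        REM whole address space, which managed-code analysis needs. Run it
        REM while the site is wedged, before restarting the WWW service.
        procdump -accepteula -ma 1234 C:\dumps\w3wp_hang.dmp

    If a single 600 MB+ snapshot keeps failing, taking a few smaller consecutive dumps (procdump -s 60 -n 3 against the same PID) sometimes succeeds where one large capture does not.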

    Read the article

  • Router reporting failed admin login attempts from home server

    - by jeffora
    I recently noticed in the logs of my home router that it relatively regularly lists the following entry: [admin login failure] from source 192.168.0.160, Monday, June 20, 2011 18:13:25. 192.168.0.160 is the internal address of my home server, running Windows Home Server 2011. Is there any way I can find out what specifically is trying to log in to the router? Or is there some explanation for this behaviour? (Not sure if this belongs here or on Super User...) [Update] I've run both Wireshark and Network Monitor for a while on my home server. Wireshark captured the traffic but didn't really show anything useful (at least nothing I could make use of). A simple HTTP GET request is sent from the server (192.168.0.160) to the router (192.168.0.1), from a seemingly random port (I've seen examples from 50068 and 52883), and it appears to do it twice in quick succession (incrementing the port by 1), about every hour. Running netstat around the time of the failure didn't show anything (probably too long after anyway). I tried using Network Monitor because it categorises traffic by process, so I thought it might show a corresponding process for the port. Unfortunately, this traffic comes in under the 'unknown' category, meaning it's basically just a slower, less useful Wireshark. I know there's not much to go on here, but does this help in any way?
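
    One more thing worth trying on the server, sketched as a batch file with placeholder interval: log which PID owns any connection to the router's web interface at the moment the hourly request fires, then name the process from the PID.

        @echo off
        REM Every 5 seconds, record connections to the router's HTTP port with
        REM the owning PID (last column of netstat -ano output).
        :loop
        for /f "tokens=*" %%a in ('netstat -ano ^| findstr "192.168.0.1:80"') do (
            echo %date% %time%  %%a >> C:\router-connections.log
        )
        timeout /t 5 /nobreak > nul
        goto loop

        REM Afterwards, resolve a captured PID to a process name with:
        REM   tasklist /FI "PID eq <pid>"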

    Read the article

  • Locked out of our Comcast Business Gateway for seemingly no reason

    - by Tyler
    A little backstory... Last fall we migrated our ISP from AT&T to Comcast at one of our offices. At that time, we received a new modem/router from Comcast and we configured everything to our liking. We've never really had very many issues with the router aside from having to restart it every once in a while. Here's the problem... About three months ago I changed the password on the router from the default. After that, I logged into the router several times to make changes with no issue. During May I logged into the router to add two new static routes, no problems. A week ago, I tried to log into the router and could not. I tried the non-default password that I changed it to, the default, anything and everything I could think of and no luck. I restarted the router on Monday thinking it may just be locked up, but after the restart it would still not let me log in. This router is at our other office about 2 hours from here and I want to avoid having to drive down there and reset to factory defaults, reconfigure, etc. Any ideas?

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install VMware NICs for each user on the terminal server and do some type of scripting to force each user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem; however, I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to make a config file for each user and a custom program files directory for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, but it's really messy and all the VMware NICs will slow down the terminal server. I'll set up a proof of concept on Monday and let the group know how it affects performance.
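
    A less messy alternative worth evaluating, mentioned as a suggestion rather than a tested fix: the third-party ForceBindIP shim injects into a single process and binds its sockets to a chosen local IP, which avoids per-user virtual NICs entirely. The IP and path below are placeholders; one IP per user could be added as secondary addresses on the real NIC.

        REM Launch the legacy client bound to one user's dedicated source IP
        ForceBindIP.exe 192.168.1.51 "C:\LegacyApp\client.exe"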

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East EnterpriseSecurity Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog so stay tuned for more posts from him. It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn’t want to have to remember a new account and password, I didn’t want to provide the requisite information, and I didn’t want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for? A few days later, I had a similar experience, but this one went a little differently. I was surfing the web, when I happened upon some little chotcke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I’ve already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again. This is where security can grow your business. It’s a differentiator. You’ve got to have a presence on the web, and that presence has to take into account all the smart phones everyone’s carrying, and the tablets that took over cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again. I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted 2-factor authentication in case I had to login from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that’s who I am doing business with. Let’s say, for example, that I’m in a regular Texas Hold-em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I login from a Starbucks in Uzbekistan, and I transfer $2,500. What should my bank do now? 
    Should they stop the transaction? Should they call my home number? (My former bank did exactly that once when I was taking money out of an ATM on a business trip, even though I had provided my cell phone number as my primary contact. When I asked why they called my home number rather than my cell, they told me that their "policy" is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?)

    But back to Uzbekistan... Should my bank assume that I am happily at home in New Jersey, and that someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn't be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password sent to my cell phone would prove that this is not just some hacker halfway around the world.

    But what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they'll come back, and perhaps even tweet about it, or Like you, and then their friends will follow.

    How can Oracle help? Oracle has the technology and expertise to help you grow your business with security. Oracle Adaptive Access Manager will help you prevent fraud while making it easier for your customers to do business with you, by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and by allowing your customers to transact business with you using the social media accounts they already know. You also get device fingerprinting and metrics to help you grow your business securely.

    Security is not just a cost anymore. It's a way to set your business apart. With Oracle's help, you can be the business that everyone's tweeting about.

    Image courtesy of Flickr user shareski
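    To make the risk-score idea concrete, below is a minimal sketch in Python. Everything in it - the signals, weights, and thresholds - is an invented illustration of adaptive authentication in general, not Oracle Adaptive Access Manager's actual model or API.

        # Toy adaptive-authentication risk engine. All signals, weights, and
        # thresholds are illustrative assumptions, not any vendor's real model.
        from dataclasses import dataclass

        @dataclass
        class LoginContext:
            known_device: bool     # device fingerprint seen before?
            usual_country: bool    # geolocation consistent with the user's history?
            amount: float          # requested transfer amount
            typical_amount: float  # the user's historical average for this action

        def risk_score(ctx: LoginContext) -> int:
            score = 0
            if not ctx.known_device:
                score += 40        # unfamiliar machine
            if not ctx.usual_country:
                score += 40        # unfamiliar location
            if ctx.amount > 3 * ctx.typical_amount:
                score += 20        # unusually large transaction
            return score

        def required_auth(ctx: LoginContext) -> str:
            score = risk_score(ctx)
            if score >= 60:
                return "step-up: one-time password to the registered cell phone"
            if score >= 40:
                return "step-up: security question"
            return "password only"

        # The Friday-afternoon transfer from the usual machine sails through...
        print(required_auth(LoginContext(True, True, 200.0, 200.0)))
        # ...while the $2,500 transfer from an unknown machine abroad gets an OTP.
        print(required_auth(LoginContext(False, False, 2500.0, 200.0)))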

    Read the article

  • Communities - The importance of exchange and discussion

    Communication with your environment is an essential part of everyone's life. And it doesn't matter whether you are actually living in a rural area in the middle of nowhere, within the pulsating heart of a big city, or, in my case, on a wonderful island in the Indian Ocean. The ability to exchange your thoughts, your experience, and your worries with another person helps you to get different points of view and new ideas on how to resolve an issue you might be confronted with.

    Benefits of community work

    What happens to be common sense in your daily life also applies to your work environment. Working in IT, or ICT as it is called in Mauritius, requires a lot of reading and learning. Not only during your lectures at the university, but with your colleagues on a project assignment, and hopefully with 'unknown' pals in the universe of online communities. At least I can say that I learned quite a lot from other developers' code, their responses in various forums, their numerous blog articles, and while attending local user group meetings.

    When I started to work as a professional software developer (or engineer, some may say) years ago, I immediately checked for communities on the programming language, the database technology, and other vital aspects of software development in general. Luckily, they weren't too difficult to find. My employer had a subscription to the monthly magazines and newsletters of a national organisation, which also ran the biggest forum in that area. Getting in touch with other developers and reading about their common problems, but also their solutions, was a huge benefit to my growth.

    Image courtesy of Michael Kappel (CC BY-NC 2.0)

    Active participation and regular contribution to this community gave me some nice advantages, too. Within three years I was listed as a conference speaker at the annual developers' conference and provided several sessions on different topics during consecutive years. Back in 2004, I took over the responsibility and management of the monthly meetings of a regional user group, and organised it for more than two years. Furthermore, I was invited to the newly founded community program of Microsoft Germany (Community Leader/Insider Program - CLIP), and my website on Active FoxPro Pages was nominated in the second batch of online communities. Due to my community work and providing advice to others, I had the honour of being named a Microsoft Most Valuable Professional (MVP) - Visual Developer for Visual FoxPro in 2006 and 2007. It was a great experience to meet with other like-minded people, and I'm really grateful for that. Just in case, more details are listed in my Curriculum Vitae. But this all changed when I moved to Mauritius...

    Cyber island Mauritius?

    During my first months in Mauritius I was way too busy to think about community activities at all. First of all, there was the new company that had to be set up and the new staff who had to be trained, and of course the communication workflows and so on with the project managers back in Germany had to be sorted out, too. Second, I had to get a grip on my private matters, like getting the basics for my new household and exploring the neighbourhood, and last but not least I needed a break from the hectic and intensive work prior to my departure. As soon as the sea literally calmed down, I started to have conversations with my colleagues about communities and user groups. Sadly, it turned out that there were none, or at least no one was aware of any at that time. Oh oh, what did I do?
    Anyway, having this kind of background and very positive experience with offline and online activities, I decided that someday I was going to found a community in Mauritius for all kinds of IT/ICT-related fields. The main focus might be on software development, but not on a particular technology or methodology. It was clear to me that it should be an open infrastructure, and that anyone would be welcome to join, to experience, to share, and to contribute if they would like to. That was the idea at the time...

    Ok, fast-forward to recent events. At the end of October 2012 I was invited to an event called Open Days, organised by Microsoft Indian Ocean Islands together with other local partners and resellers. There I got in touch with local Technical Evangelist Arnaud Meslier, and we had a good conversation on communities during the breaks. Eventually I left a good impression on him, as we now chat on Facebook or Skype from time to time. Well, seeing that my personal and professional surroundings had settled down and were running smoothly, having that great exchange and contact with Microsoft IOI (again), and being really eager to revive my intentions from 2007, I recently founded a new community: the Mauritius Software Craftsmanship Community - #MSCC.

    It took me a while to settle on the name, but it was obvious that the community should not be attached to one single technology, like e.g. a .NET user group, Oracle developers, or Joomla friends (these are fictitious names). There are several other reasons why I came up with 'craftsmanship' as the core topic of this community. The expression 'engineering' didn't feel right for the fields covered. Software development, in all its facets, is a craft, and therefore demands a lot of practice but also guidance from more experienced developers. It also includes the process of designing, modelling, and drafting ideas; it has to deal with various types of tests and test methodologies; and it should be focused on flexible and agile ways of working, in order to meet and exceed a customer's request for a solution.

    Next, I was looking for an easy way to handle the organisation of events and meeting appointments. Having used all kinds of social media platforms like Google+, LinkedIn, Facebook, Xing, etc., I was never really confident about their event-handling features. More by chance I stumbled upon Meetup.com, and in combination with the other entities (G+ Communities, FB Pages or Groups) I am looking forward to advertising and managing all future activities here: Mauritius Software Craftsmanship Community.

    This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know there are not many 'craftsmen' yet, but it's a start... Let's see how things look by the end of the year.

    There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed on the latest updates. And last but not least, there will be a Trello workspace to collect and share ideas and provide downloads of slides, etc.

    Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius.
    Following is an overview of the current networks:

    Twitter - Latest updates and quickies
    Google+ - Community channel
    Facebook - Community Page
    LinkedIn - Community Group
    Trello - Collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends, and other interested 'geeks'.

    Your future looks bright

    Running and participating in a user group or any kind of community usually provides quite a number of advantages for everyone involved. On the one hand, it is very enjoyable for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or of a recent problem they had to tackle, and on the other hand, there are lots of companies with support programs or sponsorships tailored especially for user groups. At the moment I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as I said, companies provide all kinds of goodies, books free of charge, and sometimes even licenses for communities. Meeting other software developers or IT folks also broadens your view of the local market, and there might be interesting projects or job offers available, too.

    A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students, and of course employees. Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here on the blog or join the conversations in the social networks mentioned above. Let's get this community up and running, my fellow Mauritians!

    Read the article

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
    After getting the cube to deploy and process successfully on Friday, I was baffled on Monday when a newly added dimension caused cube processing to break. I followed my first instinct, discarded all my changes, and reverted to Friday's version - with no luck. The error message (attached below) did not help, as I was looking for some kind of SQL service error, and after examining the Windows server log and the SQL Server log, I just couldn't see anything wrong.

    After swearing for some time, and with the help of going off and working on something else for a while, I came back to the solution and looked at the data source. Even though I know I never changed the provider (the default setup gave me SQL Server Native Client), I decided to change it and give OLE DB a try. This simple change allowed my cube to process successfully again. While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).

    SSAS processing command:

    <Batch>
      <Parallel>
        <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
          <Object>
            <DatabaseID>DWH Sales Facts</DatabaseID>
            <CubeID>DWH Sales Facts</CubeID>
          </Object>
          <Type>ProcessFull</Type>
          <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
        </Process>
      </Parallel>
    </Batch>

    Processing Dimension 'Date' completed.

    Errors and Warnings from Response:

    OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
    Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.
    Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.
    Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.
    Internal error: The operation terminated unsuccessfully.
    Server: The operation has been cancelled.
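    As a footnote, the switch amounts to changing the Provider keyword in the Analysis Services data source's connection string. Here is a hypothetical before/after - the server and catalog names are invented, and the exact Native Client version string depends on what is installed:

        Before (SQL Server Native Client):
            Provider=SQLNCLI10.1;Data Source=MYSERVER;Initial Catalog=DWH_Sales;Integrated Security=SSPI

        After (Microsoft OLE DB Provider for SQL Server):
            Provider=SQLOLEDB.1;Data Source=MYSERVER;Initial Catalog=DWH_Sales;Integrated Security=SSPI

    Given the "Encryption not supported on the client" message, one plausible (and unconfirmed) explanation is that the two providers negotiate connection encryption differently, so a server-side change such as a new encryption requirement could break one provider while leaving the other working.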

    Read the article

  • Google TV Gets Bad Reception. Can Media Center Pull in the Signal?

    - by andrewbrust
    The news hit Monday morning that Google has decided to delay the release of its Google TV platform and has asked its OEMs to delay any products that embed the software. Coming just about two weeks before the 2011 Consumer Electronics Show (CES), Google's timing is about the worst imaginable. CES is where the platform should have had its coming-out party, especially given all the anticipation that has built up since its initial announcement seven months ago.

    At last year's CES, it seemed every consumer electronics company had fashioned its own software stack for Internet-based video programming and applications/widgets on its TVs, optical disc players, and set-top boxes. In one case, I even saw two platforms on a single TV set (one provided by Yahoo and the other native to the TV set). The whole point of Google TV was to solve this problem and offer a standard, embeddable platform. But that won't be happening, at least not for a while. Google seems unable to get it together, and more proprietary approaches, like Apple TV, don't seem to be setting the world of TV-Internet convergence on fire, either.

    It seems to me that when it comes to building a "TV operating system," Windows Media Center is still the best of a bad bunch. But it won't stay so for much longer without some changes. Will Redmond pick up the ball that Google has fumbled? I'm skeptical, but hopeful. Regardless, here are some steps that could help Microsoft make the most of Google's faux pas:

    1. Introduce a new Media Center version that uses Xbox 360, rather than Windows 7 (or 8), as the platform. TV platforms should be appliance-like, not PC-like. Combine that notion with the runaway sales numbers for Xbox 360 Kinect, and the mass appeal it has delivered for Xbox, and the switch from Windows makes even more sense. As I have pointed out before, Microsoft's Xbox implementation of its Mediaroom platform (announced and demoed at last year's CES) gets Redmond 80% of the way toward this goal. Nothing stops Microsoft from going the other 20%, other than its own apathy, which I hope has dissipated.

    2. Reverse the decision to remove Drive Extender technology from Windows Home Server (WHS), and create deep integration between WHS and Media Center. I have suggested this previously as well, but the recent announcement that Drive Extender would be dropped from WHS 2.0 creates the need for me to a) join the chorus of people urging Microsoft to reconsider and b) reiterate the importance of Media Center-WHS integration in the context of a Google-compete scenario.

    3. Enable Windows Phone 7 (WP7) as a Media Center client. This would tighten the integration loop already established between WP7, Xbox, and Zune. But it would also counter EchoStar/DISH Network/Sling Media, strike a blow against Google/Android (and even Apple/iOS), and could be the final strike against TiVo.

    4. Bring the WP7 user interface to Media Center and Kinect-enable it. This would further the integration discussed above and would be appropriate recognition of WP7's Metro UI having been built on the heritage of the original Media Center itself. And being able to run your DVR even if you can't find the remote (or can't see its buttons in the dark) could be a nifty gimmick.

    Microsoft can do this, but its consumer-oriented organization, responsible for Xbox, Zune, and WP7, has to take the reins here, or none of this will likely work. There's a significant chance that won't happen, but I won't let that stop me from hoping that it does and insisting that it must.
Honestly, this fight is Microsoft’s to lose.

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies

    "Engineered to Inform and Inspire" - that's the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that's the way we, Oracle employees, look at this event - as just another opportunity for attendees to learn about what we do. But it's really a two-way street. Every year I'm amazed by how informed and inspired I am by our customers and their companies.

    Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team, I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions.

    It's no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison's keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we're featuring executives from three companies who deployed Oracle ERP rapidly to support their company's growth:

    Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California-based designer and manufacturer of premium headphones (sorry, no free samples)
    Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain
    Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas

    That's a pretty diverse lineup, and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, "Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications", is at 3:30 p.m. on Wednesday, October 3, in the Concert room at the Palace Hotel.

    Oracle loves our hometown of San Francisco, and it's a great place to host Oracle OpenWorld. It's now San Francisco's largest conference, and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure, but the conference brings about $100 million into the local economy. Now that's a two-way street.
    More Oracle Accelerate at Oracle OpenWorld:

    "Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators", Monday, October 1st, 10:45 a.m., Moscone West Room 3016
    "Oracle Accelerate and Oracle Business Accelerators for Midsize Companies" (partners only), Wednesday, October 3, 10:15 a.m., Marriott - Golden Gate B
    Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds
    Download the Focus On Oracle Accelerate for Midsize Companies Focus document

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >