Search Results

Search found 4349 results on 174 pages for 'remember me'.


  • Internet Explorer Automation: how to suppress Open/Save dialog?

    - by Vladimir Dyuzhev
    When controlling IE instance via MSHTML, how to suppress Open/Save dialogs for non-HTML content? I need to get data from another system and import it into our one. Due to budget constraints no development (e.g. WS) can be done on the other side for some time, so my only option for now is to do web scrapping. The remote site is ASP.NET-based, so simple HTML requests won't work -- too much JS. I wrote a simple C# application that uses MSHTML and SHDocView to control an IE instance. So far so good: I can perform login, navigate to desired page, populate required fields and do submit. Then I face a couple of problems: First is that report is opening in another window. I suspect I can attach to that window too by enumerating IE windows in the system. Second, more troublesome, is that report itself is CSV file, and triggers Open/Save dialog. I'd like to avoid it and make IE save the file into given location OR I'm fine with programmatically clicking dialog buttons too (how?) I'm actually totally non-Windows guy (unix/J2EE), and hope someone with better knowledge would give me a hint how to do those tasks. Thanks! UPDATE I've found a promising document on MSDN: http://msdn.microsoft.com/en-ca/library/aa770041.aspx Control the kinds of content that are downloaded and what the WebBrowser Control does with them once they are downloaded. For example, you can prevent videos from playing, script from running, or new windows from opening when users click on links, or prevent Microsoft ActiveX controls from downloading or executing. Slowly reading through... UPDATE 2: MADE IT WORK, SORT OF... Finally I made it work, but in an ugly way. Essentially, I register a handler "before navigate", then, in the handler, if the URL is matching my target file, I cancel the navigation, but remember the URL, and use WebClient class to access and download that temporal URL directly. I cannot copy the whole code here, it contains a lot of garbage, but here are the essential parts: Installing handler: _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); Recording URL and then cancelling download (thus preventing Save dialog to appear): public string downloadUrl; void IE_OnBeforeNavigate2(Object ob1, ref Object URL, ref Object Flags, ref Object Name, ref Object da, ref Object Head, ref bool Cancel) { Console.WriteLine("Before Navigate2 "+URL); if (URL.ToString().EndsWith(".csv")) { Console.WriteLine("CSV file"); downloadUrl = URL.ToString(); } Cancel = false; } void IE2_FileDownload(bool activeDocument, ref bool cancel) { Console.WriteLine("FileDownload, downloading "+downloadUrl+" instead"); cancel = true; } void IE_OnNewWindow2(ref Object o, ref bool cancel) { Console.WriteLine("OnNewWindow2"); _IE2 = new SHDocVw.InternetExplorer(); _IE2.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); _IE2.Visible = true; o = _IE2; _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE2.Silent = true; cancel = false; return; } And in the calling code using the found URL for direct download: ... driver.ClickButton(".*_btnRunReport"); driver.WaitForComplete(); Thread.Sleep(10000); WebClient Client = new WebClient(); Client.DownloadFile(driver.downloadUrl, "C:\\affinity.dump"); (driver is a simple wrapper over IE instance = _IE) Hope that helps someone.
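
    For the first problem above (the report opening in a separate window), one way to sketch the "enumerate IE windows" idea is with the ShellWindows collection from the same SHDocVw interop library; this is only an illustration, and the URL fragment used to identify the report window is an assumption:

        using SHDocVw;

        static class ReportWindowFinder
        {
            // Walks the shell's window list and returns the first IE window whose
            // URL contains the given fragment, or null if none is found.
            public static InternetExplorer Find(string urlFragment)
            {
                var windows = new ShellWindows();
                foreach (InternetExplorer ie in windows)
                {
                    if (ie.LocationURL != null && ie.LocationURL.Contains(urlFragment))
                        return ie;
                }
                return null;
            }
        }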

    Read the article

  • [C#][XNA] Draw() 20,000 32 by 32 Textures or 1 Large Texture 20,000 Times

    - by Rudi
    The title may be confusing - sorry about that, it's a poor summary. Here's my dilemma. I'm programming in C# using the .NET Framework 4, and aiming to make a tile-based game with XNA. I have one large texture (256 pixels by 4096 pixels). Remember this is a tile-based game, so this texture is so massive only because it contains many tiles, which are each 32 pixels by 32 pixels. I think the experts will definitely know what a tile-based game is like. The orientation is orthogonal (like a chess board), not isometric. In the Game.Draw() method, I have two choices, one of which will be incredibly more efficient than the other. Choice/Method #1: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { SpriteBatch.Draw( MyLargeTexture, // One large 256 x 4096 texture new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(x, y, 32, 32), // Notice the source rectangle 'cuts out' 32 by 32 squares from the texture corresponding to the loop Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the first method is referencing one large texture many many times, each time using a small rectangle of this large texture to draw the appropriate tile image. Choice/Method #2: Semi-Pseudocode: public void Draw() { // map tiles are drawn left-to-right, top-to-bottom for (int x = 0; x < mapWidth; x++) { for (int y = 0; y < mapHeight; y++) { Texture2D tileTexture = map.GetTileTexture(x, y); // Getting a small 32 by 32 texture (different each iteration of the loop) SpriteBatch.Draw( tileTexture, new Rectangle(x, y, 32, 32), // Destination rectangle - ignore this, its ok new Rectangle(0, 0, tileTexture.Width, tileTexture.Height), // Notice the source rectangle uses the entire texture, because the entire texture IS 32 by 32 Color.White); // No tint - ignore this, its ok } } } Caption: So, effectively, the second method is drawing many small textures many times. The Question: Which method and why? Personally, I would think it would be incredibly more efficient to use the first method. If you think about what that means for the tile array in a map (think of a large map with 2000 by 2000 tiles, let's say), each Tile object would only have to contain 2 integers, for the X and Y positions of the source rectangle in the one large texture - 8 bytes. If you use method #2, however, each Tile object in the tile array of the map would have to store a 32by32 Texture - an image - which has to allocate memory for the R G B A pixels 32 by 32 times - is that 4096 bytes per tile then? So, which method and why? First priority is speed, then memory-load, then efficiency or whatever you experts believe.
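
    To make the trade-off concrete, here is a minimal sketch (names and layout constants are illustrative, assuming the 256 x 4096 sheet described above) of the bookkeeping Choice #1 needs: each map cell stores only a small integer tile ID, and the 32 x 32 source rectangle is computed from it:

        using Microsoft.Xna.Framework;

        static class TileAtlas
        {
            const int TileSize = 32;
            const int TilesPerRow = 256 / TileSize; // 8 tiles per row in the 256 x 4096 sheet

            // Maps an integer tile ID stored in the map array to its 32x32
            // source rectangle inside the single large texture.
            public static Rectangle SourceRect(int tileId)
            {
                int col = tileId % TilesPerRow;
                int row = tileId / TilesPerRow;
                return new Rectangle(col * TileSize, row * TileSize, TileSize, TileSize);
            }
        }

    The Draw call in Choice #1 would then pass TileAtlas.SourceRect(tileId) as its source rectangle, which is why per-tile storage stays at a few bytes instead of a full Texture2D.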

    Read the article

  • Strange problem with Google App Engine Java Mail

    - by Velu
    Hi, I'm using the MailService feature of Google App Engine in my application. It works fine in one application without any issues. But the same code doesn't work in another app. I'm not able to figure it out. Please help. Following is the piece of code that I use to send mail. public static void sendHTMLEmail(String from, String fromName, String to, String toName, String subject, String body) { _logger.info("entering ..."); Properties props = new Properties(); Session session = Session.getDefaultInstance(props, null); _logger.info("got mail session ..."); String htmlBody = body; try { Message msg = new MimeMessage(session); _logger.info("created mimemessage ..."); msg.setFrom(new InternetAddress(from, fromName)); _logger.info("from is set ..."); msg.addRecipient(Message.RecipientType.TO, new InternetAddress( to, toName)); _logger.info("recipient is set ..."); msg.setSubject(subject); _logger.info("subject is set ..."); Multipart mp = new MimeMultipart(); MimeBodyPart htmlPart = new MimeBodyPart(); htmlPart.setContent(htmlBody, "text/html"); mp.addBodyPart(htmlPart); _logger.info("body part added ..."); msg.setContent(mp); _logger.info("content is set ..."); Transport.send(msg); _logger.info("email sent successfully."); } catch (AddressException e) { e.printStackTrace(); } catch (MessagingException e) { e.printStackTrace(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); System.err.println(e.getMessage()); } } When I look at the log (on the server admin console), it prints the statement "content is set ..." and after that there is nothing in the log. The mail is not sent. At times I get the following error after the above statement is printed (and the mail is not sent). com.google.appengine.repackaged.com.google.common.base.internal.Finalizer getInheritableThreadLocalsField: Couldn't access Thread.inheritableThreadLocals. Reference finalizer threads will inherit thread local values. But the mail quota usage keeps increasing. Remember, this works fine in one application, but not in other. I'm using the same set of email addresses in both the apps (for from and to). I'm really stuck with this. Appreciate any help. Thank you. Velu

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks, I'm developing in LAMP (Linux+Apache+MySQL+PHP) since I remember myself. But one question was bugging me for years now. I hope you can help me to find an answer and point me into the right direction. Here is my challenge: Say, we are creating a community website, where we allow our users to register. The MySQL table where we store all users would look then like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; So, from this snippet you can see that we have a unique and automatically incrementing for every new user 'uid' field. As on every good and loyal community website we need to provide users with possibility to completely delete their profile if they want to cancel their participation in our community. Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2) and Chris (uid = 3). Now Bob want to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table then his missing 'uid' will create a gap which will be never filled again. In my opinion it's a huge waste of uid's. I see 3 possible solutions here: 1) Increase the capacity of the 'uid' field in our table from SMALLINT (int(2)) to, for example, BIGINT (int(8)) and ignore the fact that some of the uid's will be wasted. 2) introduce the new field 'is_deleted', which will be used to mark deleted profiles (but keep them in the table, instead of deleting them) to re-utilize their uid's for newly registered users. The table will look then like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations', `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; 3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case when Bob (uid = 2) decides to remove his profile, we would replace his record with the record of Chris (uid = 3), so that uid of Chris becomes qual to 2 and mark (is_deleted = '1') the old record of Chris as vacant for the new users. In this case we keep the chronological order of uid's according to the registration time, so that the older users have lower uid's. Please, advice me now which way is the right way to handle the gaps in the auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!

    Read the article

  • Giving markup support to a custom server control with public PlaceHolder properties

    - by ravinsp
    I have a custom server control with two public PlaceHolder properties exposed to outside. I can use this control in a page like this: <cc1:MyControl ID="MyControl1" runat="server"> <TitleTemplate> Title text and anything else </TitleTemplate> <ContentTemplate> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> <asp:Button ID="Button1" runat="server" Text="Button" /> </ContentTemplate> </cc1:MyControl> TitleTemplate and ContentTemplate are properties of type asp.net PlaceHolder class. Everything works fine. The control gets any content given to these custom properties and produces a custom HTML output around them. If I want a Button1_Click event handler, I can attach the event handler in Page_Load like the following code. And it works. protected void Page_Load(object sender, EventArgs e) { Button1.Click += new EventHandler(Button1_Click); } void Button1_Click(object sender, EventArgs e) { TextBox1.Text = "Button1 clicked"; } But if try to attach the click event handler in aspx markup I get an error when running the application "Compiler Error Message: CS0117: 'ASP.default_aspx' does not contain a definition for 'Button1_Click' <asp:Button ID="Button1" runat="server" Text="Button" OnClick="Button1_Click" /> AutoEventWireup is set to "true" in the page markup. This happens only for child controls inside my custom control. I can programatically access child control correctly. Only problem is with event handler assignment from Markup. When I select the child Button in markup, the properties window only detects it as a < BUTTON. Not System.Web.UI.Controls.Button. It also doesn't display the "Events" tab. How can I give markup support for this scenario? Here's code for MyControl class if needed. And remember, I'm not using any ITemplate types for this. The custom properties I provide are of type "PlaceHolder". [ToolboxData("<{0}:MyControl runat=server>" + "<TitleTemplate></TitleTemplate>" + "<ContentTemplate></ContentTemplate>" + "</{0}:MyControl>")] public class MyControl : WebControl { PlaceHolder contentTemplate, titleTemplate; public MyControl() { contentTemplate = new PlaceHolder(); titleTemplate = new PlaceHolder(); Controls.Add(contentTemplate); Controls.Add(titleTemplate); } [Browsable(true)] [TemplateContainer(typeof(PlaceHolder))] [PersistenceMode(PersistenceMode.InnerProperty)] public PlaceHolder TitleTemplate { get { return titleTemplate; } } [Browsable(true)] [TemplateContainer(typeof(PlaceHolder))] [PersistenceMode(PersistenceMode.InnerProperty)] public PlaceHolder ContentTemplate { get { return contentTemplate; } } protected override void RenderContents(HtmlTextWriter output) { output.Write("<div>"); output.Write("<div class=\"title\">"); titleTemplate.RenderControl(output); output.Write("</div>"); output.Write("<div class=\"content\">"); contentTemplate.RenderControl(output); output.Write("</div>"); output.Write("</div>"); } }

    Read the article

  • Help me upgrade my pf.conf for OpenBSD 4.7

    - by polemon
    I'm planning on upgrading my OpenBSD to 4.7 (from 4.6) and as you may or may not know, they changed the syntax for pf.conf. This is the relevant portion from the upgrade guide: pf(4) NAT syntax change As described in more detail in this mailing list post, PF's separate nat/rdr/binat (translation) rules have been replaced with actions on regular match/filter rules. Simple rulesets may be converted like this: nat on $ext_if from 10/8 -> ($ext_if) rdr on $ext_if to ($ext_if) -> 1.2.3.4 becomes match out on $ext_if from 10/8 nat-to ($ext_if) match in on $ext_if to ($ext_if) rdr-to 1.2.3.4 and... binat on $ext_if from $web_serv_int to any -> $web_serv_ext becomes match on $ext_if from $web_serv_int to any binat-to $web_serv_ext nat-anchor and/or rdr-anchor lines, e.g. for relayd(8), ftp-proxy(8) and tftp-proxy(8), are no longer used and should be removed from pf.conf(5), leaving only the anchor lines. Translation rules relating to these and spamd(8) will need to be adjusted as appropriate. N.B.: Previously, translation rules had "stop at first match" behaviour, with binat being evaluated first, followed by nat/rdr depending on direction of the packet. Now the filter rules are subject to the usual "last match" behaviour, so care must be taken with rule ordering when converting. pf(4) route-to/reply-to syntax change The route-to, reply-to, dup-to and fastroute options in pf.conf move to filteropts; pass in on $ext_if route-to (em1 192.168.1.1) from 10.1.1.1 pass in on $ext_if reply-to (em1 192.168.1.1) to 10.1.1.1 becomes pass in on $ext_if from 10.1.1.1 route-to (em1 192.168.1.1) pass in on $ext_if to 10.1.1.1 reply-to (em1 192.168.1.1) Now, this is my current pf.conf: # $OpenBSD: pf.conf,v 1.38 2009/02/23 01:18:36 deraadt Exp $ # # See pf.conf(5) for syntax and examples; this sample ruleset uses # require-order to permit mixing of NAT/RDR and filter rules. # Remember to set net.inet.ip.forwarding=1 and/or net.inet6.ip6.forwarding=1 # in /etc/sysctl.conf if packets are to be forwarded between interfaces. 
ext_if="pppoe0" int_if="nfe0" int_net="192.168.0.0/24" polemon="192.168.0.10" poletopw="192.168.0.12" segatop="192.168.0.20" table <leechers> persist set loginterface $ext_if set skip on lo match on $ext_if all scrub (no-df max-mss 1440) altq on $ext_if priq bandwidth 950Kb queue {q_pri, q_hi, q_std, q_low} queue q_pri priority 15 queue q_hi priority 10 queue q_std priority 7 priq(default) queue q_low priority 0 nat-anchor "ftp-proxy/*" rdr-anchor "ftp-proxy/*" nat on $ext_if from !($ext_if) -> ($ext_if) rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021 rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80 rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22 rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000 rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600 anchor "ftp-proxy/*" block pass on $int_if queue(q_hi, q_pri) pass out on $ext_if queue(q_std, q_pri) pass out on $ext_if proto icmp queue q_pri pass out on $ext_if proto {tcp, udp} to any port ssh queue(q_hi, q_pri) pass out on $ext_if proto {tcp, udp} to any port http queue(q_std, q_pri) #pass out on $ext_if proto {tcp, udp} all queue(q_low, q_hi) pass out on $ext_if proto {tcp, udp} from <leechers> queue(q_low, q_std) pass in on $ext_if proto tcp to ($ext_if) port ident queue(q_hi, q_pri) pass in on $ext_if proto tcp to ($ext_if) port ssh queue(q_hi, q_pri) pass in on $ext_if proto tcp to ($ext_if) port http queue(q_hi, q_pri) pass in on $ext_if inet proto icmp all icmp-type echoreq queue q_pri If someone has experience with porting the 4.6 pf.conf to 4.7, please help me do the correct changes. OK, this is how far I've got: I commented out nat-anchor and rdr-anchor, as describted in the guide: #nat-anchor "ftp-proxy/*" #rdr-anchor "ftp-proxy/*" And this is how I've "converted" the rdr rules: #nat on $ext_if from !($ext_if) -> ($ext_if) match out on $ext_if from !($ext_if) nat-to ($ext_if) #rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021 match in on $int_if proto tcp to port ftp rdr-to 127.0.0.1 port 8021 #rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80 match in on $ext_if proto tcp tp port 2080 rdr-to $segatop port 80 #rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22 match in on $ext_if proto tcp tp port 2022 rdr-to $segatop port 22 rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000 match in on $ext_if proto tcp tp port 4000 rdr-to $polemon port 4000 rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600 match in on $ext_if proto tcp tp port 6600 rdr-to $polemon port 6600 Did I miss anything? Is the anchor for ftp-proxy OK as it is now? Do I need to change something in the other pass in on... lines?

    Read the article

  • I'm looking for a reliable way to verify T-SQL stored procedures. Anybody got one?

    - by Cory Larson
    Hi all-- We're upgrading from SQL Server 2005 to 2008. Almost every database in the 2005 instance is set to 2000 compatibility mode, but we're jumping to 2008. Our testing is complete, but what we've learned is that we need to get faster at it. I've discovered some stored procedures that either SELECT data from missing tables or try to ORDER BY columns that don't exist. Wrapping the SQL to create the procedures in SET PARSEONLY ON and trapping errors in a try/catch only catches the invalid columns in the ORDER BYs. It does not find the error with the procedure selecting data from the missing table. SSMS 2008's intellisense, however, DOES find the issue, but I can still go ahead and successfully run the ALTER script for the procedure without it complaining. So, why can I even get away with creating a procedure that fails when it runs? Are there any tools out there that can do better than what I've tried? The first tool I found wasn't very useful: DbValidator from CodeProject, but it finds fewer problems than this script I found on SqlServerCentral, which found the invalid column references. ------------------------------------------------------------------------- -- Check Syntax of Database Objects -- Copyrighted work. Free to use as a tool to check your own code or in -- any software not sold. All other uses require written permission. ------------------------------------------------------------------------- -- Turn on ParseOnly so that we don't actually execute anything. SET PARSEONLY ON GO -- Create a table to iterate through declare @ObjectList table (ID_NUM int NOT NULL IDENTITY (1, 1), OBJ_NAME varchar(255), OBJ_TYPE char(2)) -- Get a list of most of the scriptable objects in the DB. insert into @ObjectList (OBJ_NAME, OBJ_TYPE) SELECT name, type FROM sysobjects WHERE type in ('P', 'FN', 'IF', 'TF', 'TR', 'V') order by type, name -- Var to hold the SQL that we will be syntax checking declare @SQLToCheckSyntaxFor varchar(max) -- Var to hold the name of the object we are currently checking declare @ObjectName varchar(255) -- Var to hold the type of the object we are currently checking declare @ObjectType char(2) -- Var to indicate our current location in iterating through the list of objects declare @IDNum int -- Var to indicate the max number of objects we need to iterate through declare @MaxIDNum int -- Set the inital value and max value select @IDNum = Min(ID_NUM), @MaxIDNum = Max(ID_NUM) from @ObjectList -- Begin iteration while @IDNum <= @MaxIDNum begin -- Load per iteration values here select @ObjectName = OBJ_NAME, @ObjectType = OBJ_TYPE from @ObjectList where ID_NUM = @IDNum -- Get the text of the db Object (ie create script for the sproc) SELECT @SQLToCheckSyntaxFor = OBJECT_DEFINITION(OBJECT_ID(@ObjectName, @ObjectType)) begin try -- Run the create script (remember that PARSEONLY has been turned on) EXECUTE(@SQLToCheckSyntaxFor) end try begin catch -- See if the object name is the same in the script and the catalog (kind of a special error) if (ERROR_PROCEDURE() <> @ObjectName) begin print 'Error in ' + @ObjectName print ' The Name in the script is ' + ERROR_PROCEDURE()+ '. (They don''t match)' end -- If the error is just that this already exists then we don't want to report that. else if (ERROR_MESSAGE() <> 'There is already an object named ''' + ERROR_PROCEDURE() + ''' in the database.') begin -- Report the error that we got. 
print 'Error in ' + ERROR_PROCEDURE() print ' ERROR TEXT: ' + ERROR_MESSAGE() end end catch -- Setup to iterate to the next item in the table select @IDNum = case when Min(ID_NUM) is NULL then @IDNum + 1 else Min(ID_NUM) end from @ObjectList where ID_NUM > @IDNum end -- Turn the ParseOnly back off. SET PARSEONLY OFF GO Any suggestions?

    Read the article

  • How can I exit from a PHP script and continue right after the script?

    - by Samir Ghobril
    Hey guys, I have this piece of code, and when I add return after echo(if there is an error and I need to continue right after the script) I can't see the footer, do you know what the problem is? <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en" > <head> <title>Login | JM Today </title> <link href="Mainstyles.css" type="text/css" rel="stylesheet" /> </head> <body> <div class="container"> <?php include("header.php"); ?> <?php include("navbar.php"); ?> <?php include("cleanquery.php") ?> <div id="wrap"> <?php ini_set('display_errors', 'On'); error_reporting(E_ALL | E_STRICT); $conn=mysql_connect("localhost", "***", "***") or die(mysql_error()); mysql_select_db('jmtdy', $conn) or die(mysql_error()); if(isset($_POST['sublogin'])){ if(( strlen($_POST['user']) >0) && (strlen($_POST['pass']) >0)) { checklogin($_POST['user'], $_POST['pass']); } elseif((isset($_POST['user']) && empty($_POST['user'])) || (isset($_POST['pass']) && empty($_POST['pass']))){ echo '<p class="statusmsg">You didn\'t fill in the required fields.</p><br/><input type="button" value="Retry" onClick="location.href='."'login.php'\">"; return; } } else{ echo '<p class="statusmsg">You came here by mistake, didn\'t you?</p><br/><input type="button" value="Retry" onClick="location.href='."'login.php'\">"; return; } function checklogin($username, $password){ $username=mysql_real_escape_string($username); $password=mysql_real_escape_string($password); $result=mysql_query("select * from users where username = '$username'"); if($result != false){ $dbArray=mysql_fetch_array($result); $dbArray['password']=mysql_real_escape_string($dbArray['password']); $dbArray['username']=mysql_real_escape_string($dbArray['username']); if(($dbArray['password'] != $password ) || ($dbArray['username'] != $username)){ echo '<p class="statusmsg">The username or password you entered is incorrect. Please try again.</p><br/><input type="button" value="Retry" onClick="location.href='."'login.php'\">"; return; } $_SESSION['username']=$username; $_SESSION['password']=$password; if(isset($_POST['remember'])){ setcookie("jmuser",$_SESSION['username'],time()+60*60*24*356); setcookie("jmpass",$_SESSION['username'],time()+60*60*24*356); } } else{ echo'<p class="statusmsg"> The username or password you entered is incorrect. Please try again.</p><br/>input type="button" value="Retry" onClick="location.href='."'login.php'\">"; return; } } ?> </div> <br/> <br/> <?php include("footer.php") ?> </div> </body> </html>

    Read the article

  • What's wrong with this jQuery? It isn't working as intended

    - by Doug Smith
    Using cookies, I want it to remember the colour layout of the page. (So, if they set the gallery one color and the body background another color, it will save that on refresh. But it doesn't seem to be working. jQuery: $(document).ready(function() { if (verifier == 1) { $('body').css('background', $.cookie('test_cookie')); } if (verifier == 2) { $('#gallery').css('background', $.cookie('test_cookie')); } if (verifier == 3) { $('body').css('background', $.cookie('test_cookie')); $('#gallery').css('background', $.cookie('test_cookie')); } $('#set_cookie').click(function() { var color = $('#set_cookie').val(); $.cookie('test_cookie', color); }); $('#set_page').click(function() { $('body').css('background', $.cookie('test_cookie')); var verifier = 1; }); $('#set_gallery').click(function() { $('#gallery').css('background', $.cookie('test_cookie')); var verifier = 2; }); $('#set_both').click(function() { $('body').css('background', $.cookie('test_cookie')); $('#gallery').css('background', $.cookie('test_cookie')); var verifier = 3; }); }); HTML: <p>Please select a background color for either the page's background, the gallery's background, or both.</p> <select id="set_cookie"> <option value="#1d375a" selected="selected">Default</option> <option value="black">Black</option> <option value="blue">Blue</option> <option value="brown">Brown</option> <option value="darkblue">Dark Blue</option> <option value="darkgreen">Dark Green</option> <option value="darkred">Dark Red</option> <option value="fuchsia">Fuchsia</option> <option value="green">Green</option> <option value="grey">Grey</option> <option value="#d3d3d3">Light Grey</option> <option value="#32cd32">Lime Green</option> <option value="#f8b040">Macaroni</option> <option value="#ff7300">Orange</option> <option value="pink">Pink</option> <option value="purple">Purple</option> <option value="red">Red</option> <option value="#0fcce0">Turquoise</option> <option value="white">White</option> <option value="yellow">Yellow</option> </select> <input type="button" id="set_page" value="Page's Background" /><input type="button" id="set_gallery" value="Gallery's Background" /><input type="button" id="set_both" value="Both" /> </div> </div> </body> </html> Thanks so much for the help, I appreciate it. jsFiddle: http://jsfiddle.net/hL6Ye/

    Read the article

  • Screenshot in Android

    - by ujjawal
    The following is the code I am using to take a screen shot using GLSurfaceView. But I dont know why the onDraw() method in the GLSurfaceView.Renderer Class is not being called. Please if some one can look at the code below and point out what am I doing wrong.`public class MainActivity extends Activity { private GLSurfaceView mGLView; int x,y,w,h; Display disp; /** Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); // ToDo add your GUI initialization code here setContentView(R.layout.main); x=0; y=0; disp = getWindowManager().getDefaultDisplay(); w = disp.getWidth(); h = disp.getHeight(); mGLView = new ClearGLSurfaceView(this); } class ClearGLSurfaceView extends GLSurfaceView { public ClearGLSurfaceView(Context context) { super(context); setDebugFlags(DEBUG_CHECK_GL_ERROR | DEBUG_LOG_GL_CALLS); mRenderer = new ClearRenderer(); setRenderer(mRenderer); } ClearRenderer mRenderer; } class ClearRenderer implements GLSurfaceView.Renderer { public void onSurfaceCreated(GL10 gl, EGLConfig config) { // Do nothing special. } public void onSurfaceChanged(GL10 gl, int w, int h) { //gl.glViewport(0, 0, w, h); } public void onDrawFrame(GL10 gl) { //gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); int b[]=new int[w*(y+h)]; int bt[]=new int[w*h]; IntBuffer ib=IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(x, 0, w, y+h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib); for(int i=0, k=0; i<h; i++, k++) {//remember, that OpenGL bitmap is incompatible with Android bitmap //and so, some correction need. for(int j=0; j<w; j++) { int pix=b[i*w+j]; int pb=(pix>>16)&0xff; int pr=(pix<<16)&0x00ff0000; int pix1=(pix&0xff00ff00) | pr | pb; bt[(h-k-1)*w+j]=pix1; } } Bitmap bmp = Bitmap.createBitmap(bt, w, h,Bitmap.Config.ARGB_8888); try { File f = new File("/sdcard/testpicture.png"); f.createNewFile(); FileOutputStream fos=new FileOutputStream(f); bmp.compress(CompressFormat.PNG, 100, fos); try { fos.flush(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } try { fos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch(IOException e) { e.printStackTrace(); } } } } ` Please someone help me out. I have just started learning to work on android.

    Read the article

  • Help me argue for developing software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great and many times a blessing and cost effective (instead of leasing expensive cables). I am not arguing against remote desktops, just if one have the alternative to use either remote desktop or physical computer, I would choose the later. Also note that I am not arguing for or against remote work practices. But in my case I am required to be physically present in the office when developing software. Background, I work in a company which main business is not to develop software. Therefore the company IT policies are mainly focused on security and to efficiently deploying/maintaing thousands of computer to users. Further, the typical employee runs typical Office applications, like a word processors. Because safety/stability is such a big priority, every non production system/application, shall be deployed into a physical different network, called the test network. Software development of course also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access to the test network shall go only via a remote desktop client. Practically from ones production computer one would open up a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one would be able to access/run/install all development tools, like Eclipse IDE. Another solution would be to have a dedicated physical computer, which is physically only connected to the test network. Both solutions are available in the company. I have tested both approaches and found running Eclipse IDE, SQL developer, in the remote desktop client to be sluggish (keyboard strokes are delayed), commands like alt-tab takes me out of the remote client, enjoying... Further, screen resolution and colors are different, just to mention a few. Therefore there is nothing technical wrong with the remote client, just not optimal and frankly de-motivating. Now with the new policies put in place, plans are to remove the physical computers connected to the test network. I am looking for help to argue for why software developers shall have a dedicated physical software development computer, to be productive and cost effective. Remember that we are physically in office. Further one can notice that we are talking about approx. 50 computers out of 2000 employees. Therefore the extra budget is relatively small. This is more about policy than cost. Please note that there are lots of similar setups in other companies that work great due to a perfectly tuned systems. However, in my case it is sluggish and it would cost more money to trouble shoot the performance and fine tune it rather than to have a few physical computers. As a business case we have argued that productivity will go down by 25%, however it's my feeling that the reality is probably closer to 50%. This business case isn't really accepted and I find it very difficult to defend it to managers that has never ever used a rich IDE in their life, never mind developed software. Further the test network and remote client has no guaranteed service level, therefore it is down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • Is there a library available which can easily record and replay the results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example: //Encapsulates calling NtQuerySystemInformation buffer management. WindowsApi::AutoArray NtDll::NtQuerySystemInformation( SystemInformationClass toGet ) const { AutoArray result; ULONG allocationSize = 1024; ULONG previousSize; NTSTATUS errorCheck; do { previousSize = allocationSize; result.Allocate(allocationSize); errorCheck = WinQuerySystemInformation(toGet, result.GetAs<void>(), allocationSize, &allocationSize); if (allocationSize <= previousSize) allocationSize = previousSize * 2; } while (errorCheck == 0xC0000004L); if (errorCheck != 0) { THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck)); } return result; } //Client of the above. ProcessSnapshot::ProcessSnapshot() { using Dll::NtDll; NtDll ntdll; AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation( NtDll::SystemProcessInformation); BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>(); //Loop through the results, creating Process objects. SYSTEM_PROCESSES * asSysInfo; do { // Loop book keeping asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr); currentPtr += asSysInfo->NextEntryDelta; //Create the process for the current iteration and fill it with data. std::auto_ptr<ProcImpl> currentProc(ProcFactory( static_cast<unsigned __int32>(asSysInfo->ProcessId), this)); NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get()); if (nptr) { nptr->SetProcessName(asSysInfo->ProcessName); } // Populate process threads for(ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx) { SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx]; Thread thread( currentProc.get(), static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread), sysThread.StartAddress); currentProc->AddThread(thread); } processes.push_back(currentProc); } while(asSysInfo->NextEntryDelta != 0); } My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (Well, here it's actually relatively simple but it can be complicated), and writing a test which builds the data structure like the API call does can take 5-6 times as long as writing the code that uses the API. What I'd like to do is take a call to the API, and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things, and be able to restore pointer values to a similar buffer upon replay. (i.e. check each pointer sized value if it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false positive rate here is acceptable) Is there anything out there that does anything like this?
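
    A minimal sketch of the pointer-to-offset pass described in the question (it assumes the capture layer records the buffer's original native base address next to the copied bytes, and that pointers are 8 bytes and aligned; false positives are tolerated, as noted above):

        using System;

        static class PointerFixup
        {
            // Rewrites any 8-byte value that points inside the original native buffer
            // as an offset from the buffer start, so the bytes can be persisted and
            // later replayed into a new buffer (where the reverse transform is applied).
            public static void PointersToOffsets(byte[] buffer, long originalBaseAddress)
            {
                for (int i = 0; i + sizeof(long) <= buffer.Length; i += sizeof(long))
                {
                    long value = BitConverter.ToInt64(buffer, i);
                    if (value >= originalBaseAddress && value < originalBaseAddress + buffer.Length)
                    {
                        long offset = value - originalBaseAddress;
                        BitConverter.GetBytes(offset).CopyTo(buffer, i);
                    }
                }
            }
        }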

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OS X (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick because when on it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect that also opens the boxes server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation. I also have Synology NAS which apart from LAN file serving over AFP and WebDAV only serves up an OpenVPN/1194 and a PPTP/1732 server. When outside of the LAN I connect to this from my laptop over OpenVPN and over PPTP from my iPhone. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with a close to theoretical max throughput. The ruleset is as follows Inbound: PPTP/1723 Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range >corresponding to possible cell provider range OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any Outbound: Default outbound policy: Allow Always OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) OpenVPN/1194 UDP Allow always to 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) Block always from NAS to any On the Mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another and that's annoying on a media server. Instead I run Little Snitch which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows pf.conf (Apple App firewall tie-ins removed) (# replaced with % to avoid formatting errors) ### macro name for external interface. eth_if = "en0" vpn_if = "tap0" ### wifi_if = "en1" ### %usb_if = "en3" ext_if = $eth_if LAN="{10.0.0.0/24}" ### General housekeeping rules ### ### Drop all blocked packets silently set block-policy drop ### all incoming traffic on external interface is normalized and fragmented ### packets are reassembled. scrub in on $ext_if all fragment reassemble scrub in on $vpn_if all fragment reassemble scrub out all ### exercise antispoofing on the external interface, but add the local ### loopback interface as an exception, to prevent services utilizing the ### local loop from being blocked accidentally. ### set skip on lo0 antispoof for $ext_if inet antispoof for $vpn_if inet ### spoofing protection for all interfaces block in quick from urpf-failed ############################# block all ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090} ### Allow all udp and icmp also, necessary for Constellation. Could be tightened. 
pass on $eth_if proto {udp, icmp} from $LAN to any ### Allow AFP to 10.0.0.40 (NAS) pass out on $eth_if proto tcp from any to 10.0.0.40 port 548 ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs ### and port ranges pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201 ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110 ### Outgoing pings ok pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110 pass out on $vpn_if proto {tcp, udp, icmp} from any to any So what are my goals and what does the above setup achieve? (until you tell me otherwise :) 1) Full LAN access to the above ports on the mini/media server (including through my own VPN server) 2) All internet traffic from the mini/media server is anonymized and tunneled over VPN 3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked both because of pf and the router outgoing ruleset. It can't even do a DNS lookup through the router. So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable. The Problem at last! I want to run a minecraft server and I installed that on a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel because there are lots more port scans and hacking attempts through that than over my regular IP and I don't trust java in general. So I added the following pf rule on the mini: ### Allow Minecraft public through user mc pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc pass out on $eth_if proto {tcp, udp} from any to any user mc And these additions on the border firewall: Inbound: Allow always TCP/UDP from any to 10.0.0.40 (NAS) Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups) This works fine but only when the OpenVPN/Tunnelblick tunnel is down. When up no connection is possbile to the minecraft server from outside of LAN. inside LAN is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service. The Solution? How can I open up the minecraft server port outside of the tunnel so it's only available over en0 not the VPN tunnel? Should I a static route? But I don't know which IPs will be connecting...stumbles How secure would to estimate this setup to be and do you have other improvements to share? I've searched extensively in the last few days to no avail...If you've read this far I bet you know the answer :)

    Read the article

  • How can I hit my database with an AJAX call using JavaScript?

    - by tmedge
    I am pretty new at this stuff, so bear with me. I am using ASP.NET MVC. I have created an overlay to cover the page when someone clicks a button corresponding to a certain database entry. Because of this, ALL of my code for this functionality is in a .js file contained within my project. What I need to do is pull the info corresponding to my entry from the database itself using an AJAX call, and place that into my textboxes. Then, after the end-user has made the desired changes, I need to update that entry's values to match the input. I've been surfing the web for a while, and have failed to find an example that fits my needs effectively. Here is my code in my javascript file thus far: function editOverlay(picId) { //pull up an overlay $('body').append('<div class="overlay" />'); var $overlayClass = $('.overlay'); $overlayClass.append('<div class="dataModal" />'); var $data = $('.dataModal'); overlaySetup($overlayClass, $data); //set up form $data.append('<h1>Edit Picture</h1><br /><br />'); $data.append('Picture name: &nbsp;'); $data.append('<input class="picName" /> <br /><br /><br />'); $data.append('Relative url: &nbsp;'); $data.append('<input class="picRelURL" /> <br /><br /><br />'); $data.append('Description: &nbsp;'); $data.append('<textarea class="picDescription" /> <br /><br /><br />'); var $nameBox = $('.picName'); var $urlBox = $('.picRelURL'); var $descBox = $('.picDescription'); var pic = null; //this is where I need to pull the actual object from the db //var imgList = for (var temp in imgList) { if (temp.Id == picId) { pic= temp; } } /* $nameBox.attr('value', pic.Name); $urlBox.attr('value', pic.RelativeURL); $descBox.attr('value', pic.Description); */ //close buttons $data.append('<input type="button" value="Save Changes" class="saveButton" />'); $data.append('<input type="button" value="Cancel" class="cancelButton" />'); $('.saveButton').click(function() { /* pic.Name = $nameBox.attr('value'); pic.RelativeURL = $urlBox.attr('value'); pic.Description = $descBox.attr('value'); */ //make a call to my Save() method in my repository CloseOverlay(); }); $('.cancelButton').click(function() { CloseOverlay(); }); } The stuff I have commented out is what I need to accomplish and/or is not available until prior issues are resolved. Any and all advice is appreciated! Remember, I am VERY new to this stuff (two weeks, to be exact) and will probably need highly explicit instructions. BTW: overlaySetup() and CloseOverlay() are functions I have living someplace else. Thanks!
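
    One rough sketch of the server side (all names here are assumptions, not code from the project, and JsonRequestBehavior assumes ASP.NET MVC 2 or later): an action method returns the picture's fields as JSON, which the overlay's .js file can request with $.getJSON and use to fill the three inputs:

        using System.Linq;
        using System.Web.Mvc;

        public class Picture
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string RelativeURL { get; set; }
            public string Description { get; set; }
        }

        public class PictureController : Controller
        {
            // Stand-in data; the real action would ask the existing repository for the row by id.
            private static readonly Picture[] Pictures =
            {
                new Picture { Id = 1, Name = "Sample", RelativeURL = "/img/sample.jpg", Description = "demo" }
            };

            // GET /Picture/GetPicture?picId=1 — returns one picture's fields as JSON.
            public ActionResult GetPicture(int picId)
            {
                var pic = Pictures.FirstOrDefault(p => p.Id == picId);
                if (pic == null)
                    return Json(null, JsonRequestBehavior.AllowGet);
                return Json(new { pic.Name, pic.RelativeURL, pic.Description },
                            JsonRequestBehavior.AllowGet);
            }
        }

    A matching POST action that accepts the edited Name/RelativeURL/Description would cover the "Save Changes" button; the save handler could send the textbox values with $.post before calling CloseOverlay().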

    Read the article

  • Create a named cell dynamically

    - by CaptMorgan
    I have a workbook with 3 worksheets. 1 worksheet will have input values (not created at the moment and not needed for this question), 1 worksheet with several "template" or "source" tables, and the last worksheet has 4 formatted "target" tables (empty or not doesn't matter). Each template table has 3 columns, 1 column identifying what the values are for in the second 2 columns. The value columns have formulas in them and each cell is Named. The formulas use the cell Names rather than cell address (e.g. MyData1 instead of C2). I am trying to copy the templates into the target tables while also either copying the cell Names from the source into the targets or create the Names in the target tables based on the source cell Names. My code below I am creating the target names by using a "base" in the Name that will be changed depending on which target table it gets copied to. my sample tables have "Num0_" for a base in all the cell names (e.g. Num0_MyData1, Num0_SomeOtherData2, etc). Once the copy has completed the code will then name the cells by looking at the target Names (and address), replacing the base of the name with a new base, just adding a number of which target table it goes to, and replacing the sheet name in the address. Here's where I need help. The way I am changing that address will only work if my template and target are using the same cell addresses of their perspective sheets. Which they are not. (e.g. Template1 table has value cells, each named, of B2 thru C10, and my target table for the copy may be F52 thur G60). Bottom line I need to figure out how to copy those names over with the templates or name the cells dynamically by doing something like a replace where I am incrementing the address value based on my target table #...remember I have 4 target tables which are static, I will only copy to those areas. I am a newbie to vba so any suggestions or help is appreciated. NOTE: The copying of the table works as I want. It even names the cells (if the Template and Target Table have the same local worksheet cell address (e.g. C2) 'Declare Module level variables 'Variables for target tables are defined in sub's for each target table. Dim cellName As Name Dim newName As String Dim newAddress As String Dim newSheetVar Dim oldSheetVar Dim oldNameVar Dim srcTable1 Sub copyTables() newSheetVar = "TestSheet" oldSheetVar = "Templates" oldNameVar = "Num0_" srcTable1 = "TestTableTemplate" 'Call sub functions to copy tables, name cells and update functions. copySrc1Table copySrc2Table End Sub '****there is another sub identical to this one below for copySrc2Table. Sub copySrc1Table() newNameVar = "Num1_" trgTable1 = "SourceEnvTable1" Sheets(oldSheetVar).Select Range(srcTable1).Select Selection.Copy For Each cellName In ActiveWorkbook.Names 'Find all names with common value If cellName.Name Like oldNameVar & "*" Then 'Replace the common value with the update value you need newName = Replace(cellName.Name, oldNameVar, newNameVar) newAddress = Replace(cellName.RefersTo, oldSheetVar, newSheetVar) 'Edit the name of the name. This will change any formulas using this name as well ActiveWorkbook.Names.Add Name:=newName, RefersTo:=newAddress End If Next cellName Sheets(newSheetVar).Select Range(trgTable1).Select Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _ False, Transpose:=False End Sub PING

    Read the article

  • How do I route TCP connections via Tor? [on hold]

    - by acidzombie24
    I was reading about torchat which is essentially an anonymous chat program. It sounded cool so i wanted to experiment with making my own. First i wrote a test to grab a webpage using Http. Sicne .NET doesnt support SOCKS4A/SOCKS5 i used privoxy and my app worked. Then i switch to a TCP echo test and privoxy doesnt support TCP so i searched and installed 6+ proxy apps (freecap, socat, freeproxy, delegate are the ones i can remember from the top of my head, i also played with putty bc i know it supports tunnels and SOCK5) but i couldnt successfully get any of them to work let alone get it running with my http test that privoxy easily and painlessly did. What may i use to get TCP connections going through TOR? I spent more then 2 hours without success. I don't know if i am looking for a relay, tunnel, forwarder, proxy or a proxychain which all came up in my search. I use the config below for .NET. I need TCP working but i am first testing with http since i know i had it working using privoxy. What apps and configs do i use to get TCP going through tor? <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.net> <defaultProxy enabled="true"> <proxy bypassonlocal="True" proxyaddress="http://127.0.0.1:8118"/> </defaultProxy> <settings> <httpWebRequest useUnsafeHeaderParsing="true"/> </settings> </system.net> </configuration> -edit- Thanks to Bernd i have a solution. Here is the code i ended up writing. It isn't amazing but its fair. static NetworkStream ConnectSocksProxy(string proxyDomain, short proxyPort, string host, short hostPort, TcpClient tc) { tc.Connect(proxyDomain, proxyPort); if (System.Text.RegularExpressions.Regex.IsMatch(host, @"[\:/\\]")) throw new Exception("Invalid Host name. Use FQDN such as www.google.com. Do not have http, a port or / in it"); NetworkStream ns = tc.GetStream(); var HostNameBuf = new ASCIIEncoding().GetBytes(host); var HostPortBuf = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(hostPort)); if (true) //5 { var bufout = new byte[128]; var buflen = 0; ns.Write(new byte[] { 5, 1, 0 }, 0, 3); buflen = ns.Read(bufout, 0, bufout.Length); if (buflen != 2 || bufout[0] != 5 || bufout[1] != 0) throw new Exception(); var buf = new byte[] { 5, 1, 0, 3, (byte)HostNameBuf.Length }; var mem = new MemoryStream(); mem.Write(buf, 0, buf.Length); mem.Write(HostNameBuf, 0, HostNameBuf.Length); mem.Write(new byte[] { HostPortBuf[0], HostPortBuf[1] }, 0, 2); var memarr = mem.ToArray(); ns.Write(memarr, 0, memarr.Length); buflen = ns.Read(bufout, 0, bufout.Length); if (bufout[0] != 5 || bufout[1] != 0) throw new Exception(); } else //4a { var bufout = new byte[128]; var buflen = 0; var mem = new MemoryStream(); mem.WriteByte(4); mem.WriteByte(1); mem.Write(HostPortBuf, 0, 2); mem.Write(BitConverter.GetBytes(IPAddress.HostToNetworkOrder(1)), 0, 4); mem.WriteByte(0); mem.Write(HostNameBuf, 0, HostNameBuf.Length); mem.WriteByte(0); var memarr = mem.ToArray(); ns.Write(memarr, 0, memarr.Length); buflen = ns.Read(bufout, 0, bufout.Length); if (buflen != 8 || bufout[0] != 0 || bufout[1] != 90) throw new Exception(); } return ns; } Usage using (TcpClient client = new TcpClient()) using (var ns = ConnectSocksProxy("127.0.0.1", 9050, "website.com", 80, client)) {...}

    Read the article

  • Why does only one of two identical JavaScript snippets work in Firefox?

    - by Gigpacknaxe
    Hi, I have two image swap functions and one works in Firefox and the other does not. The swap functions are identical and both work fine in IE. Firefox does not even recognize the images as hyperlinks. I am very confused and I hope some one can shed some light on this for me. Thank you very much in advance for any and all help. FYI: the working script swaps by onClick via DIV elements and the non-working script swaps onMouseOver/Out via "a" elements. Remember both of these work just fine in IE. Joshua Working Javascript in FF: <script type="text/javascript"> var aryImages = new Array(); aryImages[1] = "/tires/images/mich_prim_mxv4_profile.jpg"; aryImages[2] = "/tires/images/mich_prim_mxv4_tread.jpg"; aryImages[3] = "/tires/images/mich_prim_mxv4_side.jpg"; for (i=0; i < aryImages.length; i++) { var preload = new Image(); preload.src = aryImages[i]; } function swap(imgIndex, imgTarget) { document[imgTarget].src = aryImages[imgIndex]; } <div id="image-container"> <div style="text-align: right">Click small images below to view larger.</div> <div class="thumb-box" onclick="swap(1, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_profile_thumb.jpg" width="75" height="75" /></div> <div class="thumb-box" onclick="swap(2, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_tread_thumb.jpg" width="75" height="75" /></div> <div class="thumb-box" onclick="swap(3, 'imgColor')"><img src="/tires/images/thumbs/mich_prim_mxv4_side_thumb.jpg" width="75" height="75" /></div> <div><img alt="" name="imgColor" src="/tires/images/mich_prim_mxv4_profile.jpg" /></div> <div><a href="mich-prim-102-large.php"><img src="/tires/images/super_view.jpg" border="0" /></a></div> Not Working in FF: <script type="text/javascript"> var aryImages = new Array(); aryImages[1] = "/images/home-on.jpg"; aryImages[2] = "/images/home-off.jpg"; aryImages[3] = "/images/services-on.jpg"; aryImages[4] = "/images/services-off.jpg"; aryImages[5] = "/images/contact_us-on.jpg"; aryImages[6] = "/images/contact_us-off.jpg"; aryImages[7] = "/images/about_us-on.jpg"; aryImages[8] = "/images/about_us-off.jpg"; aryImages[9] = "/images/career-on.jpg"; aryImages[10] = "/images/career-off.jpg"; for (i=0; i < aryImages.length; i++) { var preload = new Image(); preload.src = aryImages[i]; } function swap(imgIndex, imgTarget) { document[imgTarget].src = aryImages[imgIndex]; } <td> <a href="home.php" onMouseOver="swap(1, 'home')" onMouseOut="swap(2, 'home')"><img name="home" src="/images/home-off.jpg" alt="Home Button" border="0px" /></a> </td>

    Read the article

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc) on a custom sequence type I wrote? 1. Huge data files I have big files in the following format: A long (8 bytes) containing the number n of entries n entries, each one being composed of 3 longs (ie, 24 bytes) 2. Custom sequence I want to have a sequence on these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access on it, I wrote a class similar to the following: (deftype DataSeq [id ^long cnt ^long i cached-seq] clojure.lang.IndexedSeq (index [_] i) (count [_] (- cnt i)) (seq [this] this) (first [_] (first cached-seq)) (more [this] (if-let [s (next this)] s '())) (next [_] (if (not= (inc i) cnt) (if (next cached-seq) (DataSeq. id cnt (inc i) (next cached-seq)) (DataSeq. id cnt (inc i) (with-open [f (open-data-file id)] ; open a memory mapped byte array on the file ; seek to the exact position to begin reading ; decide on an optimal amount of data to read ; eagerly read and return that amount of data )))))) The main idea is to read ahead a bunch of entries in a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file in a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like: (defn ^DataSeq load-data [id] (next (DataSeq. id (count-entries id) -1 []))) ; count-entries is a trivial "open file and read a long" memoized As you can see, the format of the data allowed me to implement count in very simply and efficiently. 3. drop could be O(1) In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows: if dropping less then the remaining cached items, just drop the same amount from the cache and done; if dropping more than cnt, then just return the empty list. otherwise, just figure out the position in the data file, jump right into that position, and read data from there. My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check if the instance of the sequence it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop? 4. A workaround (I dislike) While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, that would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, all the subsequent calls would return data from this cache. There would be a similar logic on next: if there's a cache, just next it; otherwise, don't bother populating it -- it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal -- it is still O(n), and it could easily be O(1). 
Anyways, I don't like this workaround, and my question is still open. Any thoughts? Thanks.
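One possible compromise, sketched below purely as an illustration (none of this is from the original question): shadow clojure.core/drop in the namespace that defines the custom sequence with a protocol-backed version, so a type like DataSeq can supply an O(1) implementation while every other collection falls back to the standard lazy drop.

(ns data.seq
  (:refer-clojure :exclude [drop]))

;; Protocols dispatch on the type of the first argument, so the collection
;; comes first here; the public drop below restores clojure.core's argument order.
(defprotocol Droppable
  (-drop [coll n]))

;; Defaults: fall back to the standard lazy, O(n) drop.
(extend-protocol Droppable
  nil
  (-drop [coll n] ())
  Object
  (-drop [coll n] (clojure.core/drop n coll)))

;; DataSeq would extend Droppable with its O(1) seek-based implementation.
(defn drop [n coll]
  (-drop coll n))

The limitation is that code calling clojure.core/drop directly still gets the O(n) behaviour; only callers that require this namespace's drop benefit, so it narrows the "people must remember" problem rather than eliminating it.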

    Read the article

  • Function for counting characters/words not working

    - by user1742729
    <!DOCTYPE HTML> <html> <head> <title> Javascript - stuff </title> <script type="text/javascript"> <!-- function GetCountsAll( Wordcount, Sentancecount, Clausecount, Charactercount ) { var TextString = document.getElementById("Text").innerHTML; var Wordcount = 0; var Sentancecount = 0; var Clausecount = 0; var Charactercount = 0; // For loop that runs through all characters incrementing the variable(s) value each iteration for (i=0; i < TextString.length; i++); if (TextString.charAt(i) == " " = true) Wordcount++; return Wordcount; if (TextString.charAt(i) = "." = true) Sentancecount++; Clausecount++; return Sentancecount; if (TextString.charAt(i) = ";" = true) Clausecount++; return Clausecount; } --> </script> </head> <body> <div id="Text"> It is important to remember that XHTML is a markup language; it is not a programming language. The language only describes the placement and visual appearance of elements arranged on a page; it does not permit users to manipulate these elements to change their placement or appearance, or to perform any "processing" on the text or graphics to change their content in response to user needs. For many Web pages this lack of processing capability is not a great drawback; the pages are simply displays of static, unchanging, information for which no manipulation by the user is required. Still, there are cases where the ability to respond to user actions and the availability of processing methods can be a great asset. This is where JavaScript enters the picture. </div> <input type = "button" value = "Get Counts" class = "btnstyle" onclick = "GetCountsAll()"/> <br/> <span id= "Charactercount"> </span> Characters <br/> <span id= "Wordcount"> </span> Words <br/> <span id= "Sentancecount"> </span> Sentences <br/> <span id= "ClauseCount"> </span> Clauses <br/> </body> </html> I am a student and still learning JavaScript, so excuse any horrible mistakes. The script is meant to calculate the number of characters, words, sentences, and clauses in the passage. It's, plainly put, just not working. I have tried a multitude of things to get it to work for me and have gotten a plethora of different errors but no matter what I can NOT get this to work. Please help! (btw i know i misspelled sentence)
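For reference (this is not part of the original question), a minimal corrected version of the function might look like the sketch below: the stray semicolon after the for header is removed, comparisons use === instead of the assignment operator =, the early returns are dropped so all counters get updated, and the results are written into the existing spans.

function GetCountsAll() {
    // textContent avoids picking up any markup inside the div
    var text = document.getElementById("Text").textContent;
    var wordCount = 0, sentenceCount = 0, clauseCount = 0;
    var characterCount = text.length;

    for (var i = 0; i < text.length; i++) {   // no ";" after the loop header
        var ch = text.charAt(i);
        if (ch === " ") { wordCount++; }                      // comparison, not assignment
        if (ch === ".") { sentenceCount++; clauseCount++; }
        if (ch === ";") { clauseCount++; }
    }

    document.getElementById("Charactercount").innerHTML = characterCount;
    document.getElementById("Wordcount").innerHTML = wordCount;
    document.getElementById("Sentancecount").innerHTML = sentenceCount;
    document.getElementById("ClauseCount").innerHTML = clauseCount;
}

Counting spaces only approximates the word count (the last word has no trailing space); splitting the text on whitespace would be more accurate.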

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and is now moving into the limelight with the pending release of ASP.NET Web API, which is bringing a ton of attention to server-side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you to dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes, I know C# is strongly typed, so why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response, and rather than having to map the entire type structure of what that service returns, JsonValue allows me to cherry-pick and work only with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue: [TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there is no explicit type used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serializing it to JSON. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject store these values in pseudo collections of key/value pairs that are exposed as properties through the DynamicObject functionality in .NET.
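As a small illustration of that last point (a sketch, not from the article, and assuming the dictionary-style interface that JsonObject exposes in the System.Json bits shipped with the Web API beta): the same instance you drive through dynamic can also be enumerated as its underlying key/value collection.

using System;
using System.Json;

class JsonObjectEnumerationSketch
{
    static void Main()
    {
        // Build a value dynamically, as in the example above.
        dynamic album = new JsonObject();
        album.AlbumName = "Dirty Deeds Done Dirt Cheap";
        album.YearReleased = 1977;

        // View the very same instance as its key/value collection.
        JsonObject map = (JsonObject)album;
        foreach (var pair in map)
        {
            Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
        }
    }
}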
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that the syntax looks like normal object access syntax. Always remember, though, that these values are dynamic, which means no IntelliSense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other: // strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores property keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to in order to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before, JsonObject/JsonValue behave very similarly. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods, which read JSON data from a string or from various streams respectively. Essentially JsonValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example: [TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties, which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the right type expected as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method doesn't, as it expects two object instances, and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue(): [TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, JSON
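To tie this back to the "small morsel from a large response" scenario, here is a rough sketch of consuming a remote JSON API and cherry-picking a couple of values; the URL and property names are invented purely for illustration.

using System;
using System.Json;
using System.Net;

class JsonCherryPickSketch
{
    static void Main()
    {
        // Hypothetical endpoint; substitute a real service URL.
        var client = new WebClient();
        string jsonString = client.DownloadString("http://example.com/api/albums/1");

        // No .NET type is declared for the payload.
        dynamic album = JsonValue.Parse(jsonString);

        // Pull out only the values of interest; the assignments cast to concrete types.
        string name = album.AlbumName;
        int year = album.YearReleased;

        Console.WriteLine("{0} ({1})", name, year);
    }
}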

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center, users can schedule recordings remotely from their phones or mobile devices with Remote Potato. How it Works Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we’ll need to do some port forwarding on the router and setup an optional dynamic DNS address. When setup is completed, we will access the application through a web based interface. Silverlight is required for Streaming recorded TV, but scheduling recordings can be done through an HTML interface. Installing Remote Potato Download and install Remote Potato on the Media Center PC. (See download link below) If you plan to stream any Recorded TV, you’ll also want to install the streaming pack located on the same page. It isn’t required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You’ll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for it’s web server. Click Yes. Remote Potato server will start. Click on the configuration button at the right to to reveal the settings tabs.   One the General tab, you’ll have the option to run Remote Potato on startup and minimized in the System Tray. If you’re running Media Center on a dedicated HTPC, you’ll probably want to enable both startup options. Forwarding Ports on Your Router You’ll need to forward a couple ports on your router. By default, these will be ports 9080 and 9081. In this example we’re using a Linksys WRT54GL router, however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming Tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: It’s highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.   Find your IP Address You’ll need to find the IP address assigned to your router from your ISP. There are many ways to do this but a quick and easy way is to visit a site like checkip.dyndns.org (link available below) The current external IP address of your router will be displayed in the browser.   Dynamic DNS This is an optional step, but  it’s highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is affiliate your home router’s external IP address to a domain name. Every time your home router is assigned a a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato’s user interface is accessed over the Internet is by connecting to your router’s IP address followed by a colon and the port number. 
(Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a 3rd party provider like DynDNS.com, to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (See link at the end of the article) and sign up for a free Domain name. You’ll need to register and confirm by email.   Once you’ve signed in and selected your domain name click Activate Services. You’ll get a confirmation message that your domain name has been activated.    On the Linksys WRT54GL click on the Setup tab an then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop down list.   With DynDNS, you’ll need to fill in your username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP Address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080 Logging in Remote Potato and Recording a Show Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to  the listings at that time of the day.  Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the date and date buttons at the top right.   To setup a recording, click on a program.   You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.   Remote Potato on Mobile Devices Perhaps the coolest feature of Remote Potato is the ability to schedule recording from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.   Streaming Recorded TV Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive this error message, you’ll need to install the streaming pack for Remote Potato. This is found on the same download page as installation files. (See link below) The Begin from slider allows you to start playback from the start (by default) or a different time of the program by moving the slider. The Quality (bitrate) setting  allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience as it provided smooth quality video playback. We experienced significant stuttering during playback using the Ultra High setting.   Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right.   Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.   Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious. 
The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center. Downloads and Links: Download Remote Potato and Streaming Pack | Find your IP address | Sign Up for a Domain Name at DynDNS.com

    Read the article

  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, Ill walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it. Note: These instructions will change slightly as the bits get updated for the eventual Beta release I will update this post as soon as I get a chance to run this setup on a more recent build. Im doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images: DC Domain Controller ExchangeUM Exchange Server 2010 CS-SE Microsoft Communications Server 2010 Standard Edition TS Development machine Ill walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain.   Im also running Visual Studio 2010 on this VM because I intend to use it as a development machine.  In a future post, Ill walk through installing just the UCMA 3.0 run time to build a true production UCMA application server. Im making a couple of assumptions here: You have an existing CS 2010 site and cluster configured(well look at this in a future post) Youre starting with a fully patched Windows Server 2008 R2 machine The machine is joined to your domain This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment. Installing the UCMA 3.0 SDK Lets start by installing the UCMA 3.0 SDK.  Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package.  Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK. Install Communications Server Core Components UCMA 3.0 introduces a new concept called Auto-provisioning, which is most easily explained from the developer point of view.  Remember what your app.config looked it in UCMA 2.0?  You had to store the application GRUU, the trusted contact SIP Uri, the port for your application, and the name of the certificate authority. Thats all gone with auto-provisioning all you need in your app.config is your ApplicationId, e.g.: urn:application:MyApplication. How does CS 2010 do this? All of the applications configuration data is associated with the applications id.  UCMA also queries a replicated copy of the Central Management Database to retrieve the applications configuration data and also the configuration data for any endpoints. In this step, well run Bootstrapper.exe to install the CS Core components, this checked for the following components and installs them if they are not already present: VcRedist Sqlexpress Sqlnativeclient Sqlbackcompat Ucmaredist OcsCore.msi Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command: Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache" Create a New Trusted Application Pool The next step is to create a new trusted application pool for the new server.  
Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command: New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name> Verify that the new server was added to the CS topology by running the following PowerShell command: (Get-CsTopology -AsXml).ToString() > Topology.xml This created a file called Topology.xml in the directory that you ran the command from.  Open the file and find the Clusters section and look for a node for the new server. The Cluster Fqdn is the name of your server, and note the name of the Site that this Cluster is a part of. <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true"> <ClusterId SiteId="UcMarketing2" Number="5" /> <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com"> <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" /> </Machine> </Cluster> Configure CS Management Store Replication At this point, we have the CS Core components installed and the server configured as a trusted application pool.  We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server: Enable-CSReplica The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology: Get-CSManagementStoreReplicationStatus You can see in the screenshot below that the UpToDate property of the new server is still False Run the following PowerShell command to force the replication to run: Invoke-CSManagementStoreReplicationStatus Run Get-CSManagementStoreReplicationStatus again to verify that the new service is now up to date Request and Set a New Certificate The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate: Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority> Setting the -Verbose switch on the cmdlet creates an Xml file with its output. Open the Xml file and copy the thumbprint of the generated certificate. 
<?xml version="1.0" encoding="utf-8"?> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Action Name="Request-CsCertificate" Time="20100512T212258"> <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info> <Action Time="20100512T212258"> <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info> <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info> <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info> <Info Time="20100512T212259">The certificate was issued.</Info> <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info> <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259"> <Info Time="20100512T212259">0 updates</Info> <Complete Time="20100512T212259" /> </Action> <Info Title="command status" Time="20100512T212259">Command has completed</Info> </Action> </Action> Run the following PowerShell command to set the certificate: Set-CsCertificate -Type Default -Thumbprint <Thumbprint> Wrapping Up You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell. We'll take a look at how to do that in another post.
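As a companion to the auto-provisioning discussion above, here is a rough sketch (the class and application names are invented, and it assumes the standard UCMA 3.0 auto-provisioning pattern) of how an application hosted on this trusted application pool can start up with nothing but its application id:

using System;
using Microsoft.Rtc.Collaboration;

class AutoProvisionedAppSketch
{
    static void Main()
    {
        // The only piece of configuration the application carries itself.
        const string applicationId = "urn:application:MyApplication"; // illustrative

        var settings = new ProvisionedApplicationPlatformSettings("MyApp", applicationId);
        var platform = new CollaborationPlatform(settings);

        // Endpoint settings (GRUU, port, certificate) come down from the
        // replicated Central Management Store rather than from app.config.
        platform.RegisterForApplicationEndpointSettings((sender, e) =>
        {
            var endpoint = new ApplicationEndpoint(platform, e.ApplicationEndpointSettings);
            endpoint.BeginEstablish(ar => endpoint.EndEstablish(ar), null);
        });

        platform.BeginStartup(ar => platform.EndStartup(ar), null);

        Console.WriteLine("Platform starting; press Enter to shut down.");
        Console.ReadLine();
    }
}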

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.
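For readers who would rather script part of this setup than click through the Rules wizard, the sketch below uses the Outlook object model to create one of the rules described above (the "CC" rule). The folder name mirrors the scheme in the post; treat it as an illustrative starting point under those assumptions rather than a recipe.

using Outlook = Microsoft.Office.Interop.Outlook;

class OutlookRuleSketch
{
    static void Main()
    {
        var app = new Outlook.Application();
        Outlook.Rules rules = app.Session.DefaultStore.GetRules();

        // "CC": mail where I am only on the CC line goes to a dedicated folder.
        Outlook.MAPIFolder inbox =
            app.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox);
        Outlook.MAPIFolder ccFolder = inbox.Folders["CC"]; // assumes the folder already exists

        Outlook.Rule rule = rules.Create("CC", Outlook.OlRuleType.olRuleReceive);
        rule.Conditions.CC.Enabled = true;                 // my name is in the CC box
        rule.Actions.MoveToFolder.Folder = ccFolder;
        rule.Actions.MoveToFolder.Enabled = true;

        rules.Save();
    }
}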

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantage and disadvantage of Shrinking and why one should not be Shrinking a file SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside the SQL Server, it is typical that SQl Server creates two physical files in the Operating System: one with .MDF Extension, and another with .LDF Extension. .MDF is called as Primary Data File. .LDF is called as Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called as Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called as a secondary data file?” and, “Why is .MDF file called as a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. An example for Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s a system data about the data. Because Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data file because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (This file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called as the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File(s). There are many advantages of storing data in different files that are under different file groups. You can put your read only in the tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has read the write data, so that you can avoid taking the backup of a read-only data that cannot be altered. Creating additional files in different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB in the D-Drive which has a 50 GB space. You also have 1 Database File (.MDF) and 1 Log File on D-Drive and suppose that all of that 50 GB space has been used up and you do not have any free space left but you still want to add an additional space to the database. One easy option would be to add one more physical hard disk to the server, add new data file to MYDB database and create this new data file in a new hard disk then move some of the objects from one file to another, and put the file group under which you added new file as default File group, so that any new object that is created gets into the new files, unless specified. Now that we got a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature as “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove an empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release an empty space to Operating System, and MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have an extra disk space to set Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10) ? 
A: According to Books Online (For SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on the Google once. The reasons for doing so have many advantages: 1. If someone else already has had this question before, chances that it is already answered are more than 50 %. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between Shrinking the Database using DBCC command like the one above & shrinking it from the Enterprise Manager Console by Right-Clicking the database, going to TASKS & then selecting SHRINK Option, on a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference, both will work the same way, one advantage of using this command from query analyzer is, your console won’t be freezed. You can do perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: .NDF File is a secondary data file. You never heard of it because when database is created, SQL Server creates database by default with only 1 data file (.MDF) and 1 log file (.LDF) or however your model database has been setup, because a model database is a template used every time you create a new database using the CREATE DATABASE Command. Unless you have added an extra data file, you will not see it. This file is used by the SQL Server to store data which are saved by the users. Hope this information helps. I would like to as the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
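As a footnote to Imran's explanation, here is an illustrative T-SQL sketch (the database, filegroup, file and path names are made up) of adding a secondary (.NDF) data file on a new disk and then shrinking a single file. Keep in mind the caution above: shrinking causes fragmentation and should be the exception, not a routine job.

-- Add a filegroup and a secondary (.NDF) data file on a second disk.
ALTER DATABASE MYDB ADD FILEGROUP SecondaryFG;

ALTER DATABASE MYDB
ADD FILE
(
    NAME = MYDB_Data2,
    FILENAME = 'E:\SQLData\MYDB_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP SecondaryFG;

-- Make the new filegroup the default so new objects land in the new file.
ALTER DATABASE MYDB MODIFY FILEGROUP SecondaryFG DEFAULT;

-- Shrink a single file to a 5 GB target (the value is in MB). This only
-- releases empty space back to the OS; it does not compress data.
DBCC SHRINKFILE (MYDB_Data2, 5120);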

    Read the article

< Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >