Search Results

Search found 6144 results on 246 pages for 'ignore arguments'.


  • Trouble with Code First DatabaseGenerated Composite Primary Key

    - by Nick Fleetwood
    This is a tad complicated, and please, I know all the arguments against natural PKs, so we don't need to have that discussion. Using VS2012/MVC4/C#/Code First.

    The PK is based on the date and a corresponding digit together. A few rows created today would be like this:

        20131019 1
        20131019 2

    And one created tomorrow:

        20131020 1

    This has to be automatically generated using C# or as a trigger or whatever; the user wouldn't input this. I did come up with a solution, but I'm having problems with it, and I'm a little stuck, hence the question.

    So, I have a model:

        public class MainOne
        {
            //[Key]
            //public int ID { get; set; }

            [Key][Column(Order = 1)]
            [DatabaseGenerated(DatabaseGeneratedOption.None)]
            public string DocketDate { get; set; }

            [Key][Column(Order = 2)]
            [DatabaseGenerated(DatabaseGeneratedOption.None)]
            public string DocketNumber { get; set; }

            [StringLength(3, ErrorMessage = "Corp Code must be three letters")]
            public string CorpCode { get; set; }

            [StringLength(4, ErrorMessage = "Corp Code must be four letters")]
            public string DocketStatus { get; set; }
        }

    After I finish the model, I create a new controller and views using VS2012 scaffolding. Then, after Code First creates the database while debugging, I add the following INSTEAD OF trigger (I don't know if this is the correct procedure):

        CREATE TRIGGER AutoIncrement_Trigger ON [dbo].[MainOnes]
        INSTEAD OF INSERT AS
        BEGIN
            DECLARE @number INT
            SELECT @number = COUNT(*) FROM [dbo].[MainOnes]
                WHERE [DocketDate] = CONVERT(DATE, GETDATE())
            INSERT INTO [dbo].[MainOnes] (DocketDate, DocketNumber, CorpCode, DocketStatus)
                SELECT (CONVERT(DATE, GETDATE())), (@number + 1), inserted.CorpCode, inserted.DocketStatus
                FROM inserted
        END

    And when I try to create a record, this is the error I'm getting:

        The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The object state cannot be changed. This exception may result from one or more of the primary key properties being set to null. Non-Added objects cannot have null primary key values. See inner exception for details.

    Now, what's interesting to me is that after I stop debugging and start again, everything is perfect. The trigger fired perfectly, so the composite PK is unique and perfect, and the data in the other columns is intact. My guess is that EF is confused by the fact that there is seemingly no value for the PK until AFTER an insert command is given. Also appearing to back this theory: when I try to edit one of the rows in debug, I get the following error:

        The number of primary key values passed must match number of primary key values defined on the entity.

    The same error occurs if I try the 'Details' or 'Delete' actions. Any solution or ideas on how to pull this off? I'm pretty open to anything, even creating a hidden int PK, but it would seem redundant.

    EDIT 21OCT13:

        [HttpPost]
        public ActionResult Create(MainOne mainone)
        {
            if (ModelState.IsValid)
            {
                // assuming that the date field already has a value
                var countId = db.MainOnes.Count(d => d.DocketDate == mainone.DocketNumber);
                mainone.DocketNumber = countId + 1; // Cannot implicitly convert type int to string
                db.MainOnes.Add(mainone);
                db.SaveChanges();
                return RedirectToAction("Index");
            }
            return View(mainone);
        }

    EDIT 21OCT2013, FINAL CODE SOLUTION. For anyone like me who is constantly searching for clear and complete solutions:

        if (ModelState.IsValid)
        {
            String udate = DateTime.UtcNow.ToString("yyyy-MM-dd");
            mainone.DocketDate = udate;
            // the date field now has a value, so count today's rows
            var ddate = db.MainOnes.Count(d => d.DocketDate == mainone.DocketDate);
            mainone.DocketNumber = (ddate + 1).ToString(); // DocketNumber is a string
            db.MainOnes.Add(mainone);
            db.SaveChanges();
            return RedirectToAction("Index");
        }
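
    A caveat worth noting about the count-then-insert approach in the final solution (my observation, not part of the original post): two requests that arrive at the same moment can read the same count and produce duplicate docket numbers. A minimal sketch of one way to serialize the operation, assuming .NET 4.0+ with System.Transactions and a context class like the db used above (here called MyDbContext, a hypothetical name):

        using System;
        using System.Linq;
        using System.Transactions;

        // Sketch: wrap read-count-then-insert in a serializable transaction so
        // two concurrent requests cannot both see the same count for today.
        public static string CreateDocket(MyDbContext db, MainOne mainone)
        {
            var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                mainone.DocketDate = DateTime.UtcNow.ToString("yyyy-MM-dd");
                var countToday = db.MainOnes.Count(d => d.DocketDate == mainone.DocketDate);
                mainone.DocketNumber = (countToday + 1).ToString();
                db.MainOnes.Add(mainone);
                db.SaveChanges();
                scope.Complete(); // commit; leaving the block without this rolls back
            }
            return mainone.DocketNumber;
        }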


  • How to figure out who owns a worker thread that is still running when my app exits?

    - by Dave
    Not long after upgrading to VS2010, my application won't shut down cleanly. If I close the app and then hit pause in the IDE, I see this: The problem is, there's no context. The call stack just says [External code], which isn't too helpful. Here's what I've done so far to try to narrow down the problem:

        - deleted all extraneous plugins to minimize the number of worker threads launched
        - set breakpoints in my code anywhere I create worker threads (and delegates + BeginInvoke, since I think they are labeled "Worker Thread" in the debugger anyway). None were hit.
        - set IsBackground = true for all threads

    While I could do the next brute-force step, which is to roll my code back to a point where this didn't happen and then look over all of the change logs, this isn't terribly efficient. Can anyone recommend a better way to figure this out, given the notable lack of information presented by the debugger? The only other things I can think of include:

        - read up on WinDbg and try to use it to stop anytime a thread is started. At least, I thought that was possible... :)
        - comment out huge blocks of code until the app closes properly, then start uncommenting until it doesn't.

    UPDATE: Perhaps this information will be of use. I decided to use WinDbg and attach to my application. I then closed it, switched to thread 0, and dumped the stack contents. Here's what I have:

        ThreadCount:      6
        UnstartedThread:  0
        BackgroundThread: 1
        PendingThread:    0
        DeadThread:       4
        Hosted Runtime:   no

               ID  OSID  ThreadOBJ  State  PreEmptive GC  GC Alloc Context   Domain    Lock Count  APT  Exception
          0    1   1c70  005a65c8    6020  Enabled        02dac6e0:02dad7f8  005a03c0  0           STA
          2    2   1b20  005b1980    b220  Enabled        00000000:00000000  005a03c0  0           MTA  (Finalizer)
        XXXX   3         08504048   19820  Enabled        00000000:00000000  005a03c0  0           Ukn
        XXXX   4         08504540   19820  Enabled        00000000:00000000  005a03c0  0           Ukn
        XXXX   5         08516a90   19820  Enabled        00000000:00000000  005a03c0  0           Ukn
        XXXX   6         08517260   19820  Enabled        00000000:00000000  005a03c0  0           Ukn

        0:008> ~0s
        eax=c0674960 ebx=00000000 ecx=00000000 edx=00000000 esi=0040f320 edi=005a65c8
        eip=76c37e47 esp=0040f23c ebp=0040f258 iopl=0  nv up ei pl nz na po nc
        cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000202
        USER32!NtUserGetMessage+0x15:
        76c37e47 83c404 add esp,4

        0:000> !clrstack
        OS Thread Id: 0x1c70 (0)
        Child SP IP       Call Site
        0040f274 76c37e47 [InlinedCallFrame: 0040f274]
        0040f270 6baa8976 DomainBoundILStubClass.IL_STUB_PInvoke(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\WindowsBase\d17606e813f01376bd0def23726ecc62\WindowsBase.ni.dll
        0040f274 6ba924c5 [InlinedCallFrame: 0040f274] MS.Win32.UnsafeNativeMethods.IntGetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        0040f2c4 6ba924c5 MS.Win32.UnsafeNativeMethods.GetMessageW(System.Windows.Interop.MSG ByRef, System.Runtime.InteropServices.HandleRef, Int32, Int32)
        0040f2dc 6ba8e5f8 System.Windows.Threading.Dispatcher.GetMessage(System.Windows.Interop.MSG ByRef, IntPtr, Int32, Int32)
        0040f318 6ba8d579 System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame)
        0040f368 6ba8d2a1 System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame)
        0040f374 6ba7fba0 System.Windows.Threading.Dispatcher.Run()
        0040f380 62e6ccbb System.Windows.Application.RunDispatcher(System.Object)
        *** WARNING: Unable to verify checksum for C:\Windows\assembly\NativeImages_v4.0.30319_32\PresentationFramewo#\7f91eecda3ff7ce478146b6458580c98\PresentationFramework.ni.dll
        0040f38c 62e6c8ff System.Windows.Application.RunInternal(System.Windows.Window)
        0040f3b0 62e6c682 System.Windows.Application.Run(System.Windows.Window)
        0040f3c0 62e6c30b System.Windows.Application.Run()
        0040f3cc 001f00bc MyApplication.App.Main() [C:\code\trunk\MyApplication\obj\Debug\GeneratedInternalTypeHelper.g.cs @ 24]
        0040f608 66c421db [GCFrame: 0040f608]

    EDIT -- not sure if this helps, but the main thread's call stack looks like this:

        [Managed to Native Transition]
        > WindowsBase.dll!MS.Win32.UnsafeNativeMethods.GetMessageW(ref System.Windows.Interop.MSG msg, System.Runtime.InteropServices.HandleRef hWnd, int uMsgFilterMin, int uMsgFilterMax) + 0x15 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.GetMessage(ref System.Windows.Interop.MSG msg, System.IntPtr hwnd, int minMessage, int maxMessage) + 0x48 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame frame = {System.Windows.Threading.DispatcherFrame}) + 0x85 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame frame) + 0x49 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.Run() + 0x4c bytes
        PresentationFramework.dll!System.Windows.Application.RunDispatcher(object ignore) + 0x17 bytes
        PresentationFramework.dll!System.Windows.Application.RunInternal(System.Windows.Window window) + 0x6f bytes
        PresentationFramework.dll!System.Windows.Application.Run(System.Windows.Window window) + 0x26 bytes
        PresentationFramework.dll!System.Windows.Application.Run() + 0x1b bytes

    I did a search on it and found some posts related to WPF GUIs hanging, and maybe that'll give me some more clues.
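
    One low-tech trick that would have given the debugger more context (a general suggestion, not something from the post): name every thread at creation time, so the VS Threads window and WinDbg show an owner instead of an anonymous "Worker Thread". A minimal sketch, with DoWork standing in for whatever the plugin actually runs:

        using System.Threading;

        // Sketch: a descriptively named background worker. Thread.Name can
        // only be assigned once, so set it before Start().
        var worker = new Thread(DoWork)
        {
            Name = "PluginScanner",  // shows up in the Threads window / !threads
            IsBackground = true      // background threads don't keep the process alive
        };
        worker.Start();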


  • hello-1.mod.c:14: warning: missing initializer (near initialization for '__this_module.arch.unw_sec_init')

    - by Sompom
    I am trying to write a module for an sbc1651. Since the device is ARM, this requires a cross-compile. As a start, I am trying to compile the "Hello Kernel" module found here. This compiles fine on my x86 development system, but when I try to cross-compile I get the error below.

        /home/developer/HelloKernel/hello-1.mod.c:14: warning: missing initializer
        /home/developer/HelloKernel/hello-1.mod.c:14: warning: (near initialization for '__this_module.arch.unw_sec_init')

    Since this is in the .mod.c file, which is autogenerated, I have no idea what's going on. The .mod.c file seems to be generated from the module.h file. As far as I can tell, the relevant parts are the same between my x86 system's module.h and the ARM kernel headers' module.h. Adding to my confusion, this problem is either not googleable (by me...) or hasn't happened to anyone before. Or I'm just doing something clueless that anyone with any sense wouldn't do. The cross-compiler I'm using was supplied by Freescale (I think). I suppose it could be a problem with the compiler. Would it be worth trying to build the toolchain myself? Obviously, since this is a warning, I could ignore it, but since it's so strange, I am worried about it and would like to at least know the cause... Thanks very much, Sompom

    Here are the source files.

    hello-1.mod.c:

        #include <linux/module.h>
        #include <linux/vermagic.h>
        #include <linux/compiler.h>

        MODULE_INFO(vermagic, VERMAGIC_STRING);

        struct module __this_module
        __attribute__((section(".gnu.linkonce.this_module"))) = {
            .name = KBUILD_MODNAME,
            .init = init_module,
        #ifdef CONFIG_MODULE_UNLOAD
            .exit = cleanup_module,
        #endif
            .arch = MODULE_ARCH_INIT,
        };

        static const struct modversion_info ____versions[]
        __used __attribute__((section("__versions"))) = {
            { 0x3972220f, "module_layout" },
            { 0xefd6cf06, "__aeabi_unwind_cpp_pr0" },
            { 0xea147363, "printk" },
        };

        static const char __module_depends[]
        __used __attribute__((section(".modinfo"))) = "depends=";

    hello-1.c (modified slightly from the given link):

        /* hello-1.c - The simplest kernel module.
         *
         * Copyright (C) 2001 by Peter Jay Salzman
         *
         * 08/02/2006 - Updated by Rodrigo Rubira Branco <[email protected]>
         */

        /* Kernel Programming */
        #ifndef MODULE
        #define MODULE
        #endif
        #ifndef LINUX
        #define LINUX
        #endif
        #ifndef __KERNEL__
        #define __KERNEL__
        #endif

        #include <linux/module.h>  /* Needed by all modules */
        #include <linux/kernel.h>  /* Needed for KERN_ALERT */

        static int hello_init_module(void)
        {
            printk(KERN_ALERT "Hello world 1.\n");
            /* A non 0 return means init_module failed; module can't be loaded. */
            return 0;
        }

        static void hello_cleanup_module(void)
        {
            printk(KERN_ALERT "Goodbye world 1.\n");
        }

        module_init(hello_init_module);
        module_exit(hello_cleanup_module);
        MODULE_LICENSE("GPL");

    Makefile:

        export ARCH:=arm
        export CCPREFIX:=/opt/freescale/usr/local/gcc-4.4.4-glibc-2.11.1-multilib-1.0/arm-fsl-linux-gnueabi/bin/arm-linux-
        export CROSS_COMPILE:=${CCPREFIX}

        TARGET := hello-1
        WARN := -W -Wall -Wstrict-prototypes -Wmissing-prototypes -Wno-sign-compare -Wno-unused -Werror
        UNUSED_FLAGS := -std=c99 -pedantic
        EXTRA_CFLAGS := -O2 -DMODULE -D__KERNEL__ ${WARN} ${INCLUDE}
        KDIR ?= /home/developer/src/ltib-microsys/ltib/rpm/BUILD/linux-2.6.35.3

        ifneq ($(KERNELRELEASE),)
        # kbuild part of makefile
        obj-m := $(TARGET).o
        else
        # normal makefile
        default: clean
        	$(MAKE) -C $(KDIR) M=$$PWD

        .PHONY: clean
        clean:
        	-rm built-in.o
        	-rm $(TARGET).ko
        	-rm $(TARGET).ko.unsigned
        	-rm $(TARGET).mod.c
        	-rm $(TARGET).mod.o
        	-rm $(TARGET).o
        	-rm modules.order
        	-rm Module.symvers
        endif
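
    For context (my note, not part of the original question): the message is GCC's -W/-Wextra "missing initializer" diagnostic, emitted when an initializer supplies some fields of a struct but not all of them. On ARM, MODULE_ARCH_INIT apparently expands to an initializer that leaves the unwind-table fields such as unw_sec_init unfilled, which would explain why the warning appears only when cross-compiling for ARM and why it is harmless (unnamed fields are zero-initialized anyway). A tiny sketch that reproduces the same class of warning with a made-up struct:

        /* Compile with: gcc -Wextra -c demo.c */
        struct pair {
            int a;
            int b;
        };

        /* warning: missing initializer (near initialization for 'p.b');
         * b is still guaranteed to be zero-initialized by the C standard. */
        struct pair p = { 1 };

        int main(void) { return p.a + p.b; }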


  • Java Calendar DAY_OF_WEEK not working

    - by Raptrex
    I have a for loop starting at startTime and going up to endTime, and I would like it to print out the date if it is a Monday, Tuesday, Wednesday, Thursday, or Friday. Currently, it only prints out the endTime date. The other stuff splits the string, which you can ignore. Since 5/16/2010 is a Sunday, it should print out 17, 18, 19, 20, 21, 24, and 25. However, it only prints 25.

        import java.util.*;

        public class test {
            public static void main(String[] args) {
                String startTime = "5/16/2010 11:44 AM";
                String endTime = "5/25/2010 12:00 PM";

                GregorianCalendar startCal = new GregorianCalendar();
                startCal.setLenient(true);
                String[] start = splitString(startTime);
                // this sets year, month, day
                startCal.set(Integer.parseInt(start[2]), Integer.parseInt(start[0]) - 1, Integer.parseInt(start[1]));
                startCal.set(GregorianCalendar.HOUR, Integer.parseInt(start[3]));
                startCal.set(GregorianCalendar.MINUTE, Integer.parseInt(start[4]));
                if (start[5].equalsIgnoreCase("AM")) {
                    startCal.set(GregorianCalendar.AM_PM, 0);
                } else {
                    startCal.set(GregorianCalendar.AM_PM, 1);
                }

                GregorianCalendar endCal = new GregorianCalendar();
                endCal.setLenient(true);
                String[] end = splitString(endTime);
                endCal.set(Integer.parseInt(end[2]), Integer.parseInt(end[0]) - 1, Integer.parseInt(end[1]));
                endCal.set(GregorianCalendar.HOUR, Integer.parseInt(end[3]));
                endCal.set(GregorianCalendar.MINUTE, Integer.parseInt(end[4]));
                if (end[5].equalsIgnoreCase("AM")) {
                    endCal.set(GregorianCalendar.AM_PM, 0);
                } else {
                    endCal.set(GregorianCalendar.AM_PM, 1);
                }

                for (int i = startCal.get(Calendar.DATE); i < endCal.get(Calendar.DATE); i++) {
                    if (startCal.get(Calendar.DAY_OF_WEEK) == Calendar.MONDAY
                            || startCal.get(Calendar.DAY_OF_WEEK) == Calendar.TUESDAY
                            || startCal.get(Calendar.DAY_OF_WEEK) == Calendar.WEDNESDAY
                            || startCal.get(Calendar.DAY_OF_WEEK) == Calendar.THURSDAY
                            || startCal.get(Calendar.DAY_OF_WEEK) == Calendar.FRIDAY) {
                        startCal.set(Calendar.DATE, i);
                        System.out.println(startCal.get(Calendar.DATE));
                    }
                }
            }

            private static String[] splitDate(String date) {
                String[] temp1 = date.split(" ");     // split by space
                String[] temp2 = temp1[0].split("/"); // split by /
                // 5/21/2010 10:00 AM
                return temp2; // return 5 21 2010 in one array
            }

            private static String[] splitTime(String date) {
                String[] temp1 = date.split(" ");     // split by space
                String[] temp2 = temp1[1].split(":"); // split by :
                // 5/21/2010 10:00 AM
                String[] temp3 = {temp2[0], temp2[1], temp1[2]};
                return temp3; // return 10 00 AM in one array
            }

            private static String[] splitString(String date) {
                String[] temp1 = splitDate(date);
                String[] temp2 = splitTime(date);
                String[] temp3 = new String[6];
                return dateFill(temp3, temp2[0], temp2[1], temp2[2], temp1[0], temp1[1], temp1[2]);
            }

            private static String[] dateFill(String[] date, String hours, String minutes, String ampm,
                                             String month, String day, String year) {
                date[0] = month;
                date[1] = day;
                date[2] = year;
                date[3] = hours;
                date[4] = minutes;
                date[5] = ampm;
                return date;
            }

            private String dateString(String[] date) {
                // return month+" "+day+", "+year+" "+hours+":"+minutes+" "+ampm
                // 5/21/2010 10:00 AM
                return date[3] + "/" + date[4] + "/ " + date[5] + " " + date[0] + ":" + date[1] + " " + date[2];
            }
        }
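
    A hedged reading of why only 25 prints: the loop reads DAY_OF_WEEK from startCal before ever moving the calendar, so early iterations all test the start date (a Sunday) and skip the body; the date is only advanced inside the if. A sketch of a fix, reusing the startCal/endCal built above: move the calendar first, test second, and let the Calendar itself do the stepping:

        import java.util.Calendar;

        // Sketch: walk day by day from start to end (inclusive), printing weekdays.
        Calendar cursor = (Calendar) startCal.clone();
        while (!cursor.after(endCal)) {
            int dow = cursor.get(Calendar.DAY_OF_WEEK);
            if (dow >= Calendar.MONDAY && dow <= Calendar.FRIDAY) {
                System.out.println(cursor.get(Calendar.DATE));
            }
            cursor.add(Calendar.DATE, 1); // add() also rolls the month correctly
        }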


  • C++: Trouble with templates (C2064)

    - by Rosarch
    I'm having compiler errors, and I'm not sure why. What am I doing wrong here?

    Hangman.cpp:

        set<char> Hangman::incorrectGuesses()
        {
            // Hangman line 103
            return Utils::findAll_if<char>(guesses.begin(), guesses.end(), &Hangman::isIncorrectGuess);
        }

        bool Hangman::isIncorrectGuess(char c)
        {
            return correctAnswer.find(c) == string::npos;
        }

    Utils.h:

        namespace Utils
        {
            void PrintLine(const string& line, int tabLevel = 0);
            string getTabs(int tabLevel);

            template<class result_t, class Predicate>
            std::set<result_t> findAll_if(typename std::set<result_t>::iterator begin,
                                          typename std::set<result_t>::iterator end,
                                          Predicate pred)
            {
                std::set<result_t> result; // utils line 16
                return detail::findAll_if_rec<result_t>(begin, end, pred, result);
            }
        }

        namespace detail
        {
            template<class result_t, class Predicate>
            std::set<result_t> findAll_if_rec(typename std::set<result_t>::iterator begin,
                                              typename std::set<result_t>::iterator end,
                                              Predicate pred,
                                              std::set<result_t> result)
            {
                // utils line 25
                typename std::set<result_t>::iterator nextResultElem = find_if(begin, end, pred);
                if (nextResultElem == end)
                {
                    return result;
                }
                result.insert(*nextResultElem);
                return findAll_if_rec(++nextResultElem, end, pred, result);
            }
        }

    This produces the following compiler errors:

        algorithm(83): error C2064: term does not evaluate to a function taking 1 arguments
        algorithm(95) : see reference to function template instantiation
            '_InIt std::_Find_if<std::_Tree_unchecked_const_iterator<_Mytree>,_Pr>(_InIt,_InIt,_Pr)' being compiled
            with
            [
                _InIt=std::_Tree_unchecked_const_iterator<std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>>,
                _Mytree=std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>,
                _Pr=bool (__thiscall Hangman::* )(char)
            ]
        utils.h(25) : see reference to function template instantiation
            '_InIt std::find_if<std::_Tree_const_iterator<_Mytree>,Predicate>(_InIt,_InIt,_Pr)' being compiled
            with
            [
                _InIt=std::_Tree_const_iterator<std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>>,
                _Mytree=std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>,
                Predicate=bool (__thiscall Hangman::* )(char),
                _Pr=bool (__thiscall Hangman::* )(char)
            ]
        utils.h(16) : see reference to function template instantiation
            'std::set<_Kty> detail::findAll_if_rec<result_t,Predicate>(std::_Tree_const_iterator<_Mytree>,std::_Tree_const_iterator<_Mytree>,Predicate,std::set<_Kty>)' being compiled
            with
            [
                _Kty=char,
                result_t=char,
                Predicate=bool (__thiscall Hangman::* )(char),
                _Mytree=std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>
            ]
        hangman.cpp(103) : see reference to function template instantiation
            'std::set<_Kty> Utils::findAll_if<char,bool(__thiscall Hangman::* )(char)>(std::_Tree_const_iterator<_Mytree>,std::_Tree_const_iterator<_Mytree>,Predicate)' being compiled
            with
            [
                _Kty=char,
                _Mytree=std::_Tree_val<std::_Tset_traits<char,std::less<char>,std::allocator<char>,false>>,
                Predicate=bool (__thiscall Hangman::* )(char)
            ]
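
    The root cause, for what it's worth: &Hangman::isIncorrectGuess is a pointer to member function, and find_if tries to invoke it as pred(c). A member function can only be called on an object, hence "term does not evaluate to a function taking 1 arguments". One sketch of a fix, assuming the call site stays inside a Hangman member function so this is available:

        #include <algorithm>
        #include <functional>
        #include <set>

        // Sketch: wrap the member function so the predicate becomes an
        // ordinary unary callable. A lambda capturing 'this' (C++11):
        std::set<char> Hangman::incorrectGuesses()
        {
            return Utils::findAll_if<char>(
                guesses.begin(), guesses.end(),
                [this](char c) { return isIncorrectGuess(c); });
        }

        // Equivalent with std::bind, if lambdas are unavailable:
        //   std::bind(&Hangman::isIncorrectGuess, this, std::placeholders::_1)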


  • Why is one loop performing better than the other, both memory-wise and performance-wise?

    - by Mohit
    I have the following two loops in C#, and I am running these loops over a collection of 10,000 records being downloaded with paging using "yield return".

    First:

        foreach (var k in collection)
        {
            repo.Save(k);
        }

    Second:

        var collectionEnum = collection.GetEnumerator();
        while (collectionEnum.MoveNext())
        {
            var k = collectionEnum.Current;
            repo.Save(k);
            k = null;
        }

    It seems that the second loop consumes less memory and is faster than the first loop. The memory part I might understand; it may be because of k being set to null (even though I am not sure). But how come it is faster than foreach? Following is the actual code.

        [Test]
        public void BechmarkForEach_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);

            Profile("For Each Profiling", 1, () =>
            {
                var localenumertaor = contactService.Download();
                foreach (var item in localenumertaor)
                {
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                }
                contactRepo.DeleteAll();
            });
        }

        [Test]
        public void BechmarkWhile_Test()
        {
            bool isFirstTimeSync = true;
            Func<Contact, bool> afterProcessing = contactItem => { return true; };
            var contactService = CreateSerivce("/administrator/components/com_civicrm");
            var contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            contactRepo.Drop();
            contactRepo = new ContactRepository(new Mock<ILogger>().Object);
            var itemsCollection = contactService.Download().GetEnumerator();

            Profile("While Profiling", 1, () =>
            {
                while (itemsCollection.MoveNext())
                {
                    var item = itemsCollection.Current;
                    // if first time sync then ignore and overwrite the state flag
                    if (isFirstTimeSync)
                        item.StateFlag = 1;
                    item.ClientTimeStamp = DateTime.UtcNow;
                    if (item.StateFlag == 1)
                        contactRepo.Insert(item);
                    else
                        contactRepo.Update(item);
                    afterProcessing(item);
                    item = null;
                }
                contactRepo.DeleteAll();
            });
        }

        static void Profile(string description, int iterations, Action func)
        {
            // clean up
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            // warm up
            func();

            var watch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                func();
            }
            watch.Stop();
            Console.Write(description);
            Console.WriteLine(" Time Elapsed {0} ms", watch.ElapsedMilliseconds);
        }

    I'm using the micro-benchmarking approach from a Stack Overflow question itself (benchmarking-small-code). The times reported are:

        For Each Profiling Time Elapsed 5249 ms
        While Profiling Time Elapsed 116 ms
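
    One detail worth flagging in the second benchmark (my observation, not from the post): itemsCollection is created once, outside Profile, and Profile invokes the action once as a warm-up before the timed run. An enumerator produced by "yield return" is forward-only, so after the warm-up drains it, MoveNext() simply returns false and the timed 116 ms run may be iterating over zero items. A fairer comparison creates the enumerator inside the profiled action:

        // Sketch: build the enumerator inside the timed action so every run
        // (warm-up included) walks the full 10,000-record download.
        Profile("While Profiling", 1, () =>
        {
            var items = contactService.Download().GetEnumerator();
            while (items.MoveNext())
            {
                var item = items.Current;
                // ... same per-item work as the foreach version ...
            }
            contactRepo.DeleteAll();
        });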


  • REST WCF service locks thread when called using AJAX in an ASP.Net site

    - by Jupaol
    I have a WCF REST service consumed in an ASP.Net site, from a page, using AJAX. I want to be able to call methods from my service asynchronously, which means I will have callback handlers in my JavaScript code, and when the methods finish, the output will be updated. The methods should run in different threads, because each method will take a different amount of time to complete its task.

    I have the code semi-working, but something strange is happening: the first time I execute the code after compiling, it works, running each call in a different thread, but subsequent calls block the service, in such a way that each method call has to wait until the last call ends in order to execute the next one. And they are running on the same thread. I had the same problem before when I was using Page Methods, and I solved it by disabling the session in the page, but I have not figured out how to do the same when consuming WCF REST services.

    Note the methods' completion times (running them async should take only 7 sec total, and the result should be: Execute1 - Execute3 - Execute2):

        Execute1 -- 2 sec
        Execute2 -- 7 sec
        Execute3 -- 4 sec

    Output after compiling. Output on subsequent calls (this is the problem).

    I will post the code... I'll try to simplify it as much as I can.

    Service contract:

        [ServiceContract(SessionMode = SessionMode.NotAllowed)]
        public interface IMyService
        {
            // I have 3 other methods like these: Execute2 and Execute3
            [OperationContract]
            [WebInvoke(RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json,
                       UriTemplate = "/Execute1",
                       Method = "POST")]
            string Execute1(string param);
        }

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class MyService : IMyService
        {
            // I have 3 other methods like these: Execute2 (7 sec) and Execute3 (4 sec)
            public string Execute1(string param)
            {
                var t = Observable.Start(() => Thread.Sleep(2000), Scheduler.NewThread);
                t.First();
                return string.Format("Execute1 on: {0} count: {1} at: {2} thread: {3}",
                    param, "0", DateTime.Now.ToString(),
                    Thread.CurrentThread.ManagedThreadId.ToString());
            }
        }

    ASPX page:

        <%@ Page EnableSessionState="False" Title="Home Page" Language="C#"
            MasterPageFile="~/Site.master" AutoEventWireup="true"
            CodeBehind="Default.aspx.cs" Inherits="RestService._Default" %>

        <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
            <script type="text/javascript">
                function callMethodAsync(url, data) {
                    $("#message").append("<br/>" + new Date());
                    $.ajax({
                        cache: false,
                        type: "POST",
                        async: true,
                        url: url,
                        data: '"de"',
                        contentType: "application/json",
                        dataType: "json",
                        success: function (msg) {
                            $("#message").append("<br/>&nbsp;&nbsp;&nbsp;" + msg);
                        },
                        error: function (xhr) {
                            alert(xhr.responseText);
                        }
                    });
                }

                $(function () {
                    $("#callMany").click(function () {
                        $("#message").html("");
                        callMethodAsync("/Execute1", "hello");
                        callMethodAsync("/Execute2", "crazy");
                        callMethodAsync("/Execute3", "world");
                    });
                });
            </script>
        </asp:Content>

        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <input type="button" id="callMany" value="Post Many" />
            <div id="message">
            </div>
        </asp:Content>

    Web.config (relevant part):

        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>
        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                                       multipleSiteBindingsEnabled="true" />
            <standardEndpoints>
                <webHttpEndpoint>
                    <standardEndpoint name="" helpEnabled="true"
                                      automaticFormatSelectionEnabled="true" />
                </webHttpEndpoint>
            </standardEndpoints>
        </system.serviceModel>

    Global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}");
            RouteTable.Routes.Add(new ServiceRoute("", new WebServiceHostFactory(), typeof(MyService)));
        }


  • Handling a long-running JSP request on the server using Ajax and threads

    - by John Blue
    I am trying to implement a solution for a long-running process on the server, where a PDF generation request takes about 10 minutes to process. The browser gives up/times out at around the 5 minute mark. I was thinking of dealing with this using Ajax and threads. I am using plain JavaScript for the Ajax. But I am stuck. I have reached the point where the request is sent to the servlet and the servlet starts the thread. Please see the code below.

        public class HelloServlet extends javax.servlet.http.HttpServlet implements javax.servlet.Servlet {

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
            }

            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                System.out.println("POST request!!");
                LongProcess longProcess = new LongProcess();
                longProcess.setDaemon(true);
                longProcess.start();
                request.getSession().setAttribute("longProcess", longProcess);
                request.getRequestDispatcher("index.jsp").forward(request, response);
            }
        }

        class LongProcess extends Thread {
            public void run() {
                System.out.println("Thread Started!!");
                while (progress < 10) {
                    try { sleep(2000); } catch (InterruptedException ignore) {}
                    progress++;
                }
            }
        }

    Here is my Ajax call:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
        <title>My Title</title>
        <script language="JavaScript">
            function getXMLObject() // XML OBJECT
            {
                var xmlHttp = false;
                xmlHttp = new XMLHttpRequest(); // For Mozilla, Opera browsers
                return xmlHttp; // return the ajax object created
            }

            var xmlhttp = new getXMLObject(); // xmlhttp holds the ajax object

            function ajaxFunction() {
                xmlhttp.open("GET", "HelloServlet", true);
                xmlhttp.onreadystatechange = handleServerResponse;
                xmlhttp.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
                xmlhttp.send(null);
            }

            function handleServerResponse() {
                if (xmlhttp.readyState == 4) {
                    if (xmlhttp.status == 200) {
                        document.forms[0].myDiv.value = xmlhttp.responseText;
                        setTimeout(ajaxFunction, 2000); // pass the function, don't call it
                    }
                    else {
                        alert("Error during AJAX call. Please try again");
                    }
                }
            }

            function openPDF() {
                document.forms[0].method = "POST";
                document.forms[0].action = "HelloServlet";
                document.forms[0].submit();
            }

            function stopAjax() {
                clearInterval(intervalID);
            }
        </script>
        </head>
        <body>
        <form name="myForm">
            <table>
                <tr><td>
                    <INPUT TYPE="BUTTON" NAME="Download" VALUE="Download Queue ( PDF )" onclick="openPDF();">
                </td></tr>
                <tr><td>
                    Current status: <div id="myDiv"></div>%
                </td></tr>
            </table>
        </form>
        </body>
        </html>

    But I don't know how to proceed further: how will the thread tell the browser that the process is complete, and how should the Ajax call be made to check the status of the request? Please let me know if I am missing some pieces. Any suggestions would be helpful.
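
    The missing half is a status endpoint the JavaScript can poll: something that looks up the LongProcess stored in the session and reports how far along it is. A minimal sketch, assuming LongProcess gains a progress field with a thread-safe getProgress() getter (both hypothetical additions):

        // Sketch: fill in HelloServlet.doGet as the polling endpoint.
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            LongProcess longProcess =
                    (LongProcess) request.getSession().getAttribute("longProcess");
            response.setContentType("text/plain");
            if (longProcess == null) {
                response.getWriter().write("0");    // nothing started yet
            } else if (longProcess.getProgress() >= 10) {
                response.getWriter().write("done"); // JS stops polling, fetches PDF
            } else {
                // map the 0..10 internal steps to a percentage for the page
                response.getWriter().write(String.valueOf(longProcess.getProgress() * 10));
            }
        }

    On the browser side, handleServerResponse would keep re-arming setTimeout(ajaxFunction, 2000) until the response text is "done", and then trigger the actual download.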


  • Reordering Variadic Parameters

    - by void-pointer
    I have come across the need to reorder a variadic list of parameters that is supplied to the constructor of a struct. After being reordered based on their types, the parameters will be stored as a tuple. My question is how this can be done so that a modern C++ compiler (e.g. g++ 4.7) will not generate unnecessary load or store instructions. That is, when the constructor is invoked with a list of parameters of variable size, it efficiently pushes each parameter into place based on an ordering over the parameters' types.

    Here is a concrete example. Assume that the base type of every parameter (without references, rvalue references, pointers, or qualifiers) is either char, int, or float. How can I make it so that all the parameters of base type char appear first, followed by all of those of base type int (which leaves the parameters of base type float last)? The relative order in which the parameters were given should not be violated within sublists of homogeneous base type.

    Example: foo::foo() is called with arguments float a, char&& b, const float& c, int&& d, char e. The tuple type is std::tuple<char, char, int, float, float>, and it is constructed like so: tuple_type{std::move(b), e, std::move(d), a, c}.

    Consider the struct defined below, and assume that the metafunction deduce_reordered_tuple_type is already implemented. How would you write the constructor so that it works as intended? If you think that the code for deduce_reordered_tuple_type would be useful to you, I can provide it; it's a little long.

        template <class... Args>
        struct foo
        {
            // Assume that the metafunction deduce_reordered_tuple_type is defined.
            typedef typename deduce_reordered_tuple_type<Args...>::type tuple_type;
            tuple_type t_;

            foo(Args&&... args) : t_{reorder_and_forward_parameters<Args>(args)...} {}
        };

    Edit 1: The technique I describe above does have applications in mathematical frameworks that make heavy use of expression templates, variadic templates, and metaprogramming in order to perform aggressive inlining. Suppose that you wish to define an operator that takes the product of several expressions, each of which may be passed by reference, reference to const, or rvalue reference. (In my case, the expressions are conditional probability tables and the operation is the factor product, but something like matrix multiplication works suitably as well.) You need access to the data provided by each expression in order to evaluate the product. Consequently, you must move the expressions passed as rvalue references, copy the expressions passed by reference to const, and take the addresses of expressions passed by reference.

    Using the technique I describe above now poses several benefits:

        - Other expressions can use uniform syntax to access data elements from this expression, since all of the heavy-lifting metaprogramming work is done beforehand, within the class.
        - We can save stack space by grouping the pointers together and storing the larger expressions towards the end of the tuple.
        - Implementing certain types of queries becomes much easier (e.g. checking whether any of the pointers stored in the tuple aliases a given pointer).

    Thank you very much for your help!


  • Can someone help me compare using F# over C# in this specific example (IP Address expressions)?

    - by Phobis
    So, I am writing code to parse an IP Address expression and turn it into a regular expression that could be run against an IP Address string and return a boolean response. I wrote the code in C# (OO) and it was 110 lines of code. I am trying to compare the amount of code and the expressiveness of C# to F# (I am a C# programmer and a noob at F#). I don't want to post both the C# and the F#, just because I don't want to clutter the post. If needed, I will do so.

    Anyway, I will give an example. Here is an expression:

        192.168.0.250,244-248,108,51,7;127.0.0.1

    I would like to take that and turn it into this regular expression:

        ((192.168.0.(250|244|245|246|247|248|108|51|7))|(127.0.0.1))

    Here are some steps I am following:

        Break by ";"
            192.168.0.250,244-248,108,51,7
            127.0.0.1
        Break by "."
            192
            168
            0
            250,244-248,108,51,7
        Break by ","
            250
            244-248
            108
            51
            7
        Break by "-"
            244
            248

    I came up with F# that produces the output. I am trying to forward-pipe through my operations listed above, as I think that would be more expressive. Can anyone make this code better? Teach me something :)

        open System

        let createItemArray (group:bool) (y:char) (items:string[]) =
            [|
                let indexes = items.Length - 1
                let group = indexes > 0 && group
                if group then yield "("
                for i in 0 .. indexes do
                    yield items.[i].ToString()
                    if i < indexes then yield y.ToString()
                if group then yield ")"
            |]

        let breakBy (group:bool) (x:string) (y:char): string[] =
            x.Split(y) |> createItemArray group y

        let breakItem (x:string) (y:char): string[] =
            breakBy false x y

        let breakGroup (x:string) (y:char): string[] =
            breakBy true x y

        let AddressExpression address:string =
            let builder = new System.Text.StringBuilder "("
            breakGroup address ';'
            |> Array.collect (fun octet -> breakItem octet '.')
            |> Array.collect (fun options -> breakGroup options ',')
            |> Array.collect (fun (ranges : string) ->
                match (breakGroup ranges '-') with
                | x when x.Length > 3 ->
                    match (Int32.TryParse(x.[1]), Int32.TryParse(x.[3])) with
                    | ((true, a), (true, b)) ->
                        [|a .. b|]
                        |> Array.map (int >> string)
                        |> createItemArray false '-'
                    | _ -> [|ranges|]
                | _ -> [|ranges|]
            )
            |> Array.iter (fun item ->
                match item with
                | ";" -> builder.Append ")|("
                | "." -> builder.Append "\."
                | "," | "-" -> builder.Append "|"
                | _ -> builder.Append item
                |> ignore
            )
            builder.Append(")").ToString()

        let address = "192.168.0.250,244-248,108,51,7;127.0.0.1"
        AddressExpression address


  • HTML 5 <video> tag vs Flash video. What are the pros and cons?

    - by Vilx-
    Seems like the new <video> tag is all the hype these days, especially since Firefox now supports it. News of this is popping up in blogs all over the place, and everyone seems to be excited. But what about? As much as I searched, I could not find anything that would make it better than the good old Flash video. In fact, I see only problems with it:

        - It will still be some time before all the browsers start supporting it, and much more time before most people upgrade;
        - Flash is available already and everyone has it;
        - You can couple Flash with whatever fancy UI you want for controlling the playback. I gather that the tag will be controllable as well (via JavaScript probably), but will it be able to go fullscreen?

    The only two pros for a <video> tag that I can see are:

        - It is more "semantic" - which probably holds no importance to a whole lot of people, including me;
        - It is not dependent on a single commercial 3rd party entity (Adobe) - which I also don't see as a compelling reason to switch, because free players and video converters are already available, and Adobe is not hindering the whole process in any way (it's not even in their interests).

    So... what's the big deal?

    Added: OK, so there is one more pro... maybe. Support for mobile devices. Hard to say though. A number of thoughts race through my head about the subject:

        - How many mobile devices are actually able to decode video at a decent speed anyway, Flash or otherwise?
        - How long until mainstream mobile devices get <video> support? Even if it is available through updates, how many people actually do that?
        - How many people watch videos on web pages on their mobile phones at all?

    As for the semantics part - I understand that search engines might be able to detect videos better now, but... what will they do with them anyway? OK, so they know that there is a video in the page. And? They can't index a video! I'd like some more arguments here.

    Added: Just thought of another con. This opens up a whole new area of cross-browser incompatibility. HTML and CSS are quite messy already in this aspect. Flash at least is the same everywhere. But it's enough for at least one major browser vendor to decide against the <video> tag (can anyone say "Internet Explorer"?) and we have a nice new area of hell to explore.

    Added: A pro just came in. More competition = more innovation. That's true. Giving Adobe more competition will probably force them to improve Flash in areas it has been lacking so far. Linux seems to be a weak spot for it, cited by many.
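
    For reference, the markup under discussion is tiny; a minimal sketch of the tag with a Flash player nested as the fallback (file names hypothetical, and codec support varies by browser):

        <!-- Sketch: native playback where <video> is supported,
             Flash player rendered by everything else. -->
        <video src="movie.ogv" controls width="640" height="360">
            <object type="application/x-shockwave-flash" data="player.swf"
                    width="640" height="360">
                <param name="movie" value="player.swf" />
            </object>
        </video>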


  • I never really understood: what is an Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question. I just want to clearly understand things. Please don't point me to the wiki article; if I could understand it, I wouldn't be here posting such a lengthy post.

    This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (provides no functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the television set.

    Interface: an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it.

    Now, depending on who the consumer is, there are different types of interfaces.

    Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind.

        functionality: my software's functionality, which solves the purpose for which we are describing this interface
        existing entities: commands
        consumer: user

    Graphical User Interface (GUI): windows, buttons, etc. are the existing entities, again the consumer is the user, and the functionality lies behind.

        functionality: my software's functionality, which solves the purpose for which we are describing this interface
        existing entities: windows, buttons, etc.
        consumer: user

    Application Programming Interface (API): functions, or to be more correct, interfaces (in interface-based programming) are the existing entities, the consumer here is another program, not a user, and again the functionality lies behind this layer.

        functionality: my software's functionality, which solves the purpose for which we are describing this interface
        existing entities: functions, interfaces (arrays of functions)
        consumer: another program/application

    Application Binary Interface (ABI): here is where my problem starts.

        functionality: ???
        existing entities: ???
        consumer: ???

    I've written software in different languages and provided different kinds of interfaces (CLI, GUI, and API), but I'm not sure I ever provided an ABI.

    http://en.wikipedia.org/wiki/Application_binary_interface says:

        ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system. Other ABIs standardize details such as the C++ name mangling, exception propagation, and calling convention between compilers on the same platform, but do not require cross-platform compatibility.

    Who needs these details? Please don't say the OS. I know assembly programming. I know how linking and loading work. I know exactly what happens inside. Where did C++ name mangling come in? I thought we were talking at the binary level. Where did languages come in?

    Anyway, I downloaded the System V Application Binary Interface, Edition 4.1 (1997-03-18) [PDF] to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) that describe the ELF file format? In fact, those are the only two significant chapters of that specification; the rest of the chapters are "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format specs are the ABI; it doesn't qualify as an interface according to the definition.

    I know that, since we are talking at such a low level, it must be very specific. But I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find Microsoft Windows' ABI?

    So, these are the major queries that are bugging me.
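
    To make the name-mangling point concrete, here is a tiny illustration (mine, not from the post) of an ABI detail surfacing at the language level. How a C++ compiler encodes an overloaded function's name into the object file is an ABI decision; two compilers can only link each other's object files if they agree on it, and extern "C" is the standard escape hatch:

        // Sketch: the mangled names of these overloads (e.g. _Z10overloadedi
        // under the Itanium C++ ABI) are dictated by the platform's ABI;
        // compilers that disagree on the scheme produce object files that
        // cannot link together.
        int overloaded(int x)  { return x; }
        int overloaded(double) { return 0; }  // legal: mangled names differ

        extern "C" int plain_c_name(int x)    // opts out of mangling, so C code
        {                                     // (or another compiler) can call it
            return overloaded(x);             // by the literal name "plain_c_name"
        }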


  • How to define template directives (from an API perspective)?

    - by Ralph
    Preface: I'm writing a template language (don't bother trying to talk me out of it), and in it there are two kinds of user-extensible nodes: TemplateTags and TemplateDirectives.

    A TemplateTag closely relates to an HTML tag -- it might look something like

        div(class="green") { "content" }

    and it'll be rendered as

        <div class="green">content</div>

    i.e., it takes a bunch of attributes, plus some content, and spits out some HTML.

    TemplateDirectives are a little more complicated. They can be things like for loops, ifs, includes, and other such things. They look a lot like a TemplateTag, but they need to be processed differently. For example,

        @for($i in $items) {
            div(class="green") { $i }
        }

    would loop over $items and output the content with the variable $i substituted in each time.

    So... I'm trying to decide on a way to define these directives now.

    Template Tags: the TemplateTags are pretty easy to write. They look something like this:

        [TemplateTag]
        static string div(string content = null, object attrs = null)
        {
            return HtmlTag("div", content, attrs);
        }

    Here, content gets the stuff between the curly braces (pre-rendered if there are variables in it and such), and attrs is either a Dictionary<string,object> of attributes or an anonymous type used like a dictionary. It just returns the HTML, which gets plunked into its place. Simple! You can write tags in basically one line.

    Template Directives: the way I've defined them now looks like this:

        [TemplateDirective]
        static string @for(string @params, string content)
        {
            var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
            string itemName = tokens[0].Substring(1);
            string enumName = tokens[1].Substring(1);
            var enumerable = data[enumName] as IEnumerable;

            var sb = new StringBuilder();
            var template = new Template(content);
            foreach (var item in enumerable)
            {
                var templateVars = new Dictionary<string, object>(data) { { itemName, item } };
                sb.Append(template.Render(templateVars));
            }
            return sb.ToString();
        }

    (Working example.) Basically, the stuff between the ( and ) is not split into arguments automatically (like the template tags do), and the content isn't pre-rendered either. The reason it isn't pre-rendered is that you might want to add or remove some template variables first. In this case, we add the $i variable to the template variables,

        var templateVars = new Dictionary<string, object>(data) { { itemName, item } };

    and then render the content manually,

        sb.Append(template.Render(templateVars));

    Question: I'm wondering if this is the best approach to defining custom template directives. I want to make it as easy as possible. What if the user doesn't know how to render templates, or doesn't know that he's supposed to? Maybe I should pass in a Template instance pre-filled with the content instead? Or maybe only let him tamper with the template variables, and then automatically render the content at the end? On the other hand, for things like "if", if the condition fails, then the template wouldn't need to be rendered at all. So there's a lot of flexibility I need to allow here. Thoughts?
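
    For comparison, here is a sketch of the "pass in a pre-filled Template instance" alternative the question floats (the signature is hypothetical; Template and Render are as used above, and the usual System.Collections/Text/Linq/RegularExpressions imports are assumed). The directive author never touches raw content, only decides the variables and whether rendering happens at all -- an @if whose condition fails simply never calls Render:

        // Sketch: the host wraps content in a Template before dispatching.
        [TemplateDirective]
        static string @for(string @params, Template body, Dictionary<string, object> vars)
        {
            var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
            string itemName = tokens[0].Substring(1);
            var items = (IEnumerable)vars[tokens[1].Substring(1)];

            var sb = new StringBuilder();
            foreach (var item in items)
            {
                vars[itemName] = item;        // add/overwrite the loop variable
                sb.Append(body.Render(vars)); // render only when actually needed
            }
            return sb.ToString();
        }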


  • Having different database sorting order (default_scope) for two different views

    - by Juniper747
    In my model (pins.rb), I have two sorting orders:

        default_scope order: 'pins.featured DESC'   # for adding featured posts to the top of a list
        default_scope order: 'pins.created_at DESC' # for adding the remaining posts beneath the featured posts

    This sorting order (above) is how I want my pins view (index.html.erb) to look, which is just a list of ALL user posts. In my users view (show.html.erb) I am using the same model (pins.rb) to list only the current_user's pins. HOWEVER, there I want the sorting to ignore the "featured" default scope and only use the second scope:

        default_scope order: 'pins.created_at DESC'

    How can I accomplish this? I tried doing something like this:

        default_scope order: 'pins.featured DESC', only: :index
        default_scope order: 'pins.created_at DESC'

    But that didn't fly...

    UPDATE: I updated my model to define a scope:

        scope :featy, order: 'pins.featured DESC'
        default_scope order: 'pins.created_at DESC'

    And updated my pins view to:

        <%= render @pins.featy %>

    However, now when I open my pins view, I get the error:

        undefined method `featy' for #<Array:0x00000100ddbc78>

    UPDATE 2:

    User.rb:

        class User < ActiveRecord::Base
          attr_accessible :name, :email, :username, :password, :password_confirmation,
                          :avatar, :password_reset_token, :password_reset_sent_at
          has_secure_password
          has_many :pins, dependent: :destroy # destroys user posts when user is destroyed
          # has_many :featured_pins, order: 'featured DESC', class_name: "Pin", source: :pin
          has_attached_file :avatar, :styles => { :medium => "300x300#", :thumb => "120x120#" }

          before_save { |user| user.email = user.email.downcase }
          before_save { |user| user.username = user.username.downcase }
          before_save :create_remember_token
          before_save :capitalize_name

          validates :name, presence: true, length: { maximum: 50 }
          VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
          VALID_USERNAME_REGEX = /^[A-Za-z0-9]+(?:[_][A-Za-z0-9]+)*$/
          validates :email, presence: true, format: { with: VALID_EMAIL_REGEX },
                            uniqueness: { case_sensitive: false }
          validates :username, presence: true, format: { with: VALID_USERNAME_REGEX },
                               uniqueness: { case_sensitive: false }
          validates :password, length: { minimum: 6 }, on: :create # on create, because it was causing errors on password reset

    Pin.rb:

        class Pin < ActiveRecord::Base
          attr_accessible :content, :title, :privacy, :date, :dark, :bright, :fragmented,
                          :hashtag, :emotion, :user_id, :imagesource, :imageowner, :featured
          belongs_to :user

          before_save :capitalize_title
          before_validation :generate_slug

          validates :content, presence: true, length: { maximum: 8000 }
          validates :title, presence: true, length: { maximum: 24 }
          validates :imagesource, presence: { message: "Please search and choose an image" },
                                  length: { maximum: 255 }
          validates_inclusion_of :privacy, :in => [true, false]
          validates :slug, uniqueness: true, presence: true,
                           exclusion: { in: %w[signup signin signout home info privacy] }

          # for sorting featured and newest posts first
          default_scope order: 'pins.created_at DESC'
          scope :featured_order, order: 'pins.featured DESC'

          def to_param
            slug # or "#{id}-#{name}".parameterize
          end

          def generate_slug
            # makes the url slug address-bar friendly
            self.slug ||= loop do
              random_token = Digest::MD5.hexdigest(Time.zone.now.to_s + title)[0..9] + "-" + "#{title}".parameterize
              break random_token unless Pin.where(slug: random_token).exists?
            end
          end

          protected

          def capitalize_title
            self.title = title.split.map(&:capitalize).join(' ')
          end
        end

    users_controller.rb:

        class UsersController < ApplicationController
          before_filter :signed_in_user, only: [:edit, :update, :show]
          before_filter :correct_user, only: [:edit, :update, :show]
          before_filter :admin_user, only: :destroy

          def index
            if !current_user.admin?
              redirect_to root_path
            end
          end

          def menu
            @user = current_user
          end

          def show
            @user = User.find(params[:id])
            @pins = @user.pins
            current_user.touch(:last_log_in) # sets the last log in time
            if [email protected]?
              render 'pages/info/'
            end
          end

          def new
            @user = User.new
          end

    pins_controller.rb:

        class PinsController < ApplicationController
          before_filter :signed_in_user, except: [:show]

          # GET /pins, GET /pins.json
          def index # live feed
            @pins = Pin.all
            @featured_pins = Pin.featured_order
            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @pins }
            end
          end

          # GET /pins, GET /pins.json
          def show # single pin view
            @pin = Pin.find_by_slug!(params[:id])
            require 'uri' # this gets the photo's id from the stored uri
            @image_id = URI(@pin.imagesource).path.split('/').second
            if @pin.privacy == true # check for private pins
              if signed_in?
                if @pin.user_id == current_user.id
                  respond_to do |format|
                    format.html # show.html.erb
                    format.json { render json: @pin }
                  end
                else
                  redirect_to home_path, notice: "Prohibited 1"
                end
              else
                redirect_to home_path, notice: "Prohibited 2"
              end
            else
              respond_to do |format|
                format.html # show.html.erb
                format.json { render json: @pin }
              end
            end
          end

          # GET /pins, GET /pins.json
          def new
            @pin = current_user.pins.new
            respond_to do |format|
              format.html # new.html.erb
              format.json { render json: @pin }
            end
          end

          # GET /pins/1/edit
          def edit
            @pin = current_user.pins.find_by_slug!(params[:id])
          end

    Finally, in my pins view (index.html.erb) I have:

        <%= render @featured_pins %>
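
    A hedged note on the undefined method `featy' error (my reading, not from the post): in this era of Rails, Pin.all returns a plain Array, and named scopes exist only on the class and on unloaded relations/associations, not on arrays. Applying the scope before the records are materialized sidesteps the error and also gives each view its own ordering:

        # Sketch: call scopes on the class/association, not on a loaded Array.
        class PinsController < ApplicationController
          def index
            # live feed: featured first, then the default created_at ordering
            @pins = Pin.featured_order
          end
        end

        class UsersController < ApplicationController
          def show
            @user = User.find(params[:id])
            # association: only default_scope (created_at DESC) applies,
            # with no featured reordering layered on top
            @pins = @user.pins
          end
        end

    Both views can then render a plain <%= render @pins %>.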


  • Insert a transformed integer_sequence into a variadic template argument?

    - by coderforlife
    How do you insert a transformed integer_sequence (or similar, since I am targeting C++11) into a variadic template argument?

    For example, I have a class that represents a set of bit-wise flags (shown below). It is made using a nested class because you cannot have two variadic template arguments for the same class. It would be used like

        typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithValues<0x01, 0x02, 0x04> MyFlags;

    Typically, the flags will be used with values that are powers of two (although not always; in some cases certain combinations would be made, for example one could imagine a set of flags like Read=0x1, Write=0x2, and ReadWrite=0x3=0x1|0x2). I would like to provide a way to do

        typedef Flags<unsigned char, FLAG_A, FLAG_B, FLAG_C>::WithDefaultValues MyFlags;

    Here is the class:

        template <class _B, template <class,class,_B> class... _Fs>
        class Flags
        {
        public:
            template <_B... _Vs>
            class WithValues : public _Fs<_B, Flags<_B,_Fs...>::WithValues<_Vs...>, _Vs>...
            {
                // ...
            };
        };

    I have tried the following without success (placed inside the Flags class, outside the WithValues class):

        private:
            // dummy class which can be given to a flag-name template
            struct _F
            {
                template <_B _V>
                inline constexpr explicit _F(std::integral_constant<_B, _V>) { }
            };

            // we count the flags, but only in a dummy way
            static constexpr unsigned _count = sizeof...(_Fs<_B, _F, 1>);

            static inline constexpr _B pow2(unsigned exp, _B base = 2, _B result = 1)
            {
                return exp < 1 ? result : pow2(exp/2, base*base, (exp % 2) ? result*base : result);
            }

            template <_B... _Is>
            struct indices
            {
                using next = indices<_Is..., sizeof...(_Is)>;
                using WithPow2Values = WithValues<pow2(_Is)...>;
            };

            template <unsigned N>
            struct build_indices
            {
                using type = typename build_indices<N-1>::type::next;
            };

            template <>
            struct build_indices<0>
            {
                using type = indices<>;
            };

            //// Another attempt
            //template <_B... _Is> struct indices {
            //    using WithPow2Values = WithValues<pow2(_Is)...>;
            //};
            //template <unsigned N, _B... _Is> struct build_indices
            //    : build_indices<N-1, N-1, _Is...> { };
            //template <_B... _Is> struct build_indices<0, _Is...>
            //    : indices<_Is...> { };

        public:
            using WithDefaultValues = typename build_indices<_count>::type::WithPow2Values;

    Of course, I would be willing to consider alternatives to the whole situation (supporting both flag names and values in the same template set, etc.). I have included a "working" example at ideone: http://ideone.com/NYtUrg - by "working" I mean it compiles fine without using default values but fails with default values (there is a #define to switch between them). Thanks!


  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store this. There are two solutions I see.

    Row per video segment. Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
        SET views = views + ?
        WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is:

    Row per video, update in transactions. A table will look (sort of) like this:

        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this:

        1. If the video doesn't exist in the database, it is created.
        2. A row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP).
        3. Values in the array are increased appropriately and the array is converted back.
        4. The row is changed via an UPDATE query.

    So far the advantages can be summed up like this.

    First approach:

        - Stores data as floats, not as some magical binary array.
        - Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines (only applies in my specific situation).
        - Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are.
        - I already implemented it and it works (only applies in my specific situation).

    Second approach:

        - Uses a lot less storage space (the first approach stores the video ID 256 times and a position for every segment of the video, not to mention the primary key).
        - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
        - Might generally work faster because fewer requests are being made.
        - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach or another?
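
    One refinement to the first approach (my suggestion, not from the original): the existence check plus INSERT IGNORE plus UPDATE can collapse into a single statement per segment, because the unique key over (video_id, position) lets MySQL's upsert handle both the first view and every later one:

        -- Sketch: one statement per watched segment; creates the row on the
        -- first view, accumulates on subsequent ones (relies on idx_lookup).
        INSERT INTO video_heatmap (video_id, position, views)
        VALUES (?, ?, ?)
        ON DUPLICATE KEY UPDATE views = views + VALUES(views);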


  • "Too much recursion" error when loading the same page with a hash

    - by Elliott
    Hi, I have a site with an image gallery ("Portfolio") page. There is drop-down navigation that allows a user to view a specific image in the portfolio from any page on the site. The links in the navigation use a hash, and that hash is read and converted into a string of an image filename. The src attribute of the image on the /portfolio/ page is then swapped out with the new image filename.

    This works fine if I'm clicking the dropdown link from a page OTHER THAN the /portfolio/ page itself. However, if I take the same action from the /portfolio/ page, I get a "too much recursion" error in Firefox.

    Here's the code. A snippet of the nav markup:

        <li>Portfolio Category A
            <ul>
                <li><a href="/portfolio/#dining-room-table">Dining Room Table</a></li>
                <li><a href="/portfolio/#bathroom-mirror">Bathroom Mirror</a></li>
            </ul>
        </li>

    JS that reads the hash, converts it to an image filename, and swaps out the image on the page:

        $(document).ready(function() {
            if (location.pathname.indexOf("/portfolio/") > -1) {
                var hash = location.hash;
                var new_image = hash.replace("#", "") + ".jpg";
                swapImage(new_image);
            }
        });

        function swapImage(new_image) {
            setTimeout(function() {
                $("img#current-image").attr("src", "/images/portfolio/work/" + new_image);
            }, 100);
        }

    I'm using setTimeout because I'm fading out the old image before making the swap, then fading it back in. I initially thought this was the function causing the recursion error, but when I remove the setTimeout I still have this problem. Does this have to do with a closure I'm not aware of? I'm pretty green on closures.

    JS that listens for the click on the nav:

        $("nav.main li.dropdown li ul li").click(function() {
            $(this).find("a").click();
            $("nav.main").find("ul ul").hide();
            $("nav.main li.hover").removeClass("hover");
        });

    I haven't implemented the fade in/out functionality for the dropdown nav yet, but I have implemented it for the Next and Previous arrows, which can also be used to swap out images using the same swapImage function. Here's that code:

        $("#scroll-arrows a").click(function() {
            $("#current-image").animate({ opacity: 0 }, 100);
            var current_image = $("#current-image").attr("src").split("/").pop();
            var new_image;
            var positions = getPositions(current_image);
            if ($(this).is(".right")) {
                new_image = positions.next_img;
            } else {
                new_image = positions.prev_img;
            }
            swapImage(new_image);
            $("#current-image").animate({ opacity: 1 }, 100);
            return false;
        });

    Here's the error I'm getting in Firefox:

        too much recursion
        var ret = handleObj.handler.apply( this, arguments );
        jquery.js (line 1936)

    Thanks for any advice.
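
    A hedged guess at the recursion, based only on the code shown: $(this).find("a").click() is invoked from inside a click handler bound to the li. jQuery's triggered click bubbles from the anchor back up to that same li, which re-runs the handler, which triggers the anchor again, and so on until the stack gives out, which fits the error pointing into jQuery's handler dispatch. One way to break the cycle is to navigate directly instead of re-firing a click:

        // Sketch: stop the event from re-entering this handler and follow
        // the link's href manually (a triggered .click() on an <a> does not
        // perform native navigation anyway).
        $("nav.main li.dropdown li ul li").click(function (e) {
            e.stopPropagation();
            window.location = $(this).find("a").attr("href");
            $("nav.main").find("ul ul").hide();
            $("nav.main li.hover").removeClass("hover");
        });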

    Read the article

  • Errors with parameter datatype in PostgreSql query

    - by John
    I'm trying to execute a query against PostgreSQL using the following code. It's written in C/C++ and I keep getting the following error when declaring a cursor:

        DECLARE CURSOR failed: ERROR: could not determine data type of parameter $1

    Searching on here and on Google, I can't find a solution. Can anyone find where I have made an error and why this is happening? Thanks!

        void searchdb(PGconn *conn, char* name, char* offset)
        {
            // Will hold the number of fields in the table
            int nFields;

            // Start a transaction block
            PGresult *res = PQexec(conn, "BEGIN");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("BEGIN command failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);
            printf("BEGIN command - OK\n");

            // set the values to use
            const char *values[3] = {(char*)name, (char*)RESULTS_LIMIT, (char*)offset};
            // calculate the lengths of each of the values
            int lengths[3] = {strlen((char*)name), sizeof(RESULTS_LIMIT), sizeof(offset)};
            // state which parameters are binary
            int binary[3] = {0, 0, 1};

            res = PQexecParams(conn,
                "DECLARE emprec CURSOR for SELECT name, id, 'Events' as source FROM events_basic WHERE name LIKE '$1::varchar%' UNION ALL "
                " SELECT name, fsq_id, 'Venues' as source FROM venues_cache WHERE name LIKE '$1::varchar%' UNION ALL "
                " SELECT name, geo_id, 'Cities' as source FROM static_cities WHERE name LIKE '$1::varchar%' OR FIND_IN_SET('$1::varchar%', alternate_names) != 0 LIMIT $2::int4 OFFSET $3::int4",
                3,        // number of parameters
                NULL,     // ignore the Oid field
                values,   // values to substitute $1 and $2
                lengths,  // the lengths, in bytes, of each of the parameter values
                binary,   // whether the values are binary or not
                0);       // we want the result in text format

            // Fetch rows from table
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("DECLARE CURSOR failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);

            res = PQexec(conn, "FETCH ALL in emprec");
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                printf("FETCH ALL failed");
                PQclear(res);
                exit_nicely(conn);
            }

            // Get the field names
            nFields = PQnfields(res);

            // Prepare the header with the table field names
            printf("\nFetch record:");
            printf("\n********************************************************************\n");
            for (int i = 0; i < nFields; i++)
                printf("%-30s", PQfname(res, i));
            printf("\n********************************************************************\n");

            // Next, print out the record for each row
            for (int i = 0; i < PQntuples(res); i++)
            {
                for (int j = 0; j < nFields; j++)
                    printf("%-30s", PQgetvalue(res, i, j));
                printf("\n");
            }
            PQclear(res);

            // Close the emprec cursor
            res = PQexec(conn, "CLOSE emprec");
            PQclear(res);

            // End the transaction
            res = PQexec(conn, "END");
            // Clear result
            PQclear(res);
        }
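
    For what it's worth, one likely culprit (an assumption based on the error text): $1 only ever appears inside single-quoted string literals such as '$1::varchar%', where it is plain text rather than a placeholder, so the server never sees a parameter to type. A sketch of the first SELECT branch with the placeholder moved outside the quotes and the LIKE pattern built in SQL, reusing the question's values/lengths/binary arrays:

        /* Sketch: keep $1 outside string literals so PostgreSQL can type it,
           and append the '%' wildcard with the || operator. */
        res = PQexecParams(conn,
                           "DECLARE emprec CURSOR FOR "
                           "SELECT name, id, 'Events' AS source FROM events_basic "
                           "WHERE name LIKE $1::varchar || '%' "
                           "LIMIT $2::int4 OFFSET $3::int4",
                           3,      /* number of parameters */
                           NULL,   /* infer parameter types server-side */
                           values, lengths, binary,
                           0);     /* text results */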

    Read the article

  • Write a program for a report derived from the data in the data file JEWELRY. The data is to be input

    - by Taylor
    Here is the JEWELRY file:

        0011 Money_Clip 2.000 50.00 Other
        0035 Paperweight 1.625 175.00 Other
        0457 Cuff_Bracelet 2.375 150.00 Bracelet
        0465 Links_Bracelet 7.125 425.00 Bracelet
        0585 Key_Chain 1.325 50.00 Other
        0595 Cuff_Links 0.625 525.00 Other
        0935 Royale_Pendant 0.625 975.00 Pendant
        1092 Bordeaux_Cross 1.625 425.00 Cross
        1105 Victory_Medallion 0.875 30.00 Pendant
        1111 Marquis_Cross 1.375 70.00 Cross
        1160 Christina_Ring 0.500 175.00 Ring
        1511 French_Clips 0.687 375.00 Other
        1717 Pebble_Pendant 1.250 45.00 Pendant
        1725 Folded_Pendant 1.250 45.00 Pendant
        1730 Curio_Pendant 1.063 275.00 Pendant

    This is the program I have used:

        #include <iostream>
        #include <string>
        #include <iomanip>
        #include <fstream>
        using namespace std;

        struct productJewelry
        {
            string name;
            double amount;
            int itemCode;
            double size;
            string group;
        };

        int main()
        {
            // declare variables
            ifstream inFile;
            int count = 0;
            int x = 0;
            productJewelry product[50];

            inFile.open("jewelry.txt"); // file must be in same folder
            if (inFile.fail())
                cout << "failed";

            cout << fixed << showpoint; // fixed format, two decimal places
            cout << setprecision(2);

            while (inFile.peek() != EOF)
            {
                // cout << count << " : ";
                count++;
                inFile >> product[x].itemCode;
                inFile >> product[x].name;
                inFile >> product[x].size;
                inFile >> product[x].amount;
                inFile >> product[x].group;
                // cout << product[x].itemCode << ", " << product[x].name << ", " << product[x].size << ", " << product[x].amount << endl;
                x++;
                if (inFile.peek() == '\n')
                    inFile.ignore(1, '\n');
            }
            inFile.close();

            string temp;
            bool swap;
            do
            {
                swap = false;
                for (int x = 0; x < count; x++)
                {
                    if (product[x].name > product[x+1].name)
                    {
                        // these 3 lines are to swap elements in array
                        temp = product[x].name;
                        product[x].name = product[x+1].name;
                        product[x+1].name = temp;
                        swap = true;
                    }
                }
            } while (swap);

            for (x = 0; x < count; x++)
            {
                // cout << product[x].itemCode << " ";
                // cout << product[x].name << " ";
                // cout << product[x].size << " ";
                // cout << product[x].amount << " ";
                // cout << product[x].group << " " << endl;
            }

            system("pause"); // to freeze Dev-C++ output screen
            return 0;
        } // end main
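
    As an aside, a hedged alternative to the hand-rolled bubble sort above (which swaps only the name member, so every other field stays attached to the wrong record, and which reads product[x+1] one element past the last record): sort whole structs with std::sort from <algorithm>. The comparator name is my own.

        #include <algorithm>

        // Orders two records by jewelry name; std::sort moves whole structs,
        // so itemCode, size, amount and group travel with their name.
        bool byName(const productJewelry& a, const productJewelry& b)
        {
            return a.name < b.name;
        }

        // usage, after the read loop has filled product[0..count-1]:
        //     std::sort(product, product + count, byName);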

    Read the article

  • why the global variable value is not being set?

    - by masato-san
    I can't figure out why my global variable workRegionCode is not set properly. In getWorkRegions(), after the ajax callback returns, it attempts to set the workRegionCode global variable (inside setFirstIndexWorkRegionCode()). The alert in setFirstIndexWorkRegionCode() outputs an expected value like 401 or 123, etc. But then when getMachines() is called, the global variable workRegionCode is undefined :( This JS starts from window.onload() (please ignore the Japanese JSON keys/values and the few Japanese variables; they are harmless). Code:

        var workRegionDropdown = document.getElementById("workRegionDropdown");
        var machineDropdown = document.getElementById("machineDropdown");

        // this is the global variable with the problem...
        var workRegionCode;

        // INIT
        window.onload = function() {
            getWorkRegions();
            // alert("before: " + window.workRegionCode);
            getMachines();
            // alert("after: " + window.workRegionCode);
        }

        function addWorkRegionToDropdown(jsonObject, dropdown) {
            for (var i = 0, j = jsonObject.length; i < j; i++) {
                var option = document.createElement("option");
                option.text = jsonObject[i].?????? + ":" + jsonObject[i].??????;
                option.value = jsonObject[i].??????;
                dropdown.options.add(option);
            }
        }

        function addMachineToDropdown(jsonObject, dropdown) {
            for (var i = 0, j = jsonObject.length; i < j; i++) {
                var option = document.createElement("option");
                option.text = jsonObject[i].???;
                option.value = jsonObject[i].???;
                dropdown.options.add(option);
            }
        }

        function getMachines() {
            //!!!!!!!!!!! workRegionCode is undefined.. why?!?!?!
            alert("inside of getMachines() ==> " + window.workRegionCode);
            var ajaxRequest = new XMLHttpRequest();
            ajaxTimeout = setTimeout(function() { timeoutAjax(ajaxRequest, "showMessage") }, "5000");
            ajaxRequest.onreadystatechange = function() {
                if (ajaxRequest.readyState == 4) {
                    clearTimeout(ajaxTimeout);
                    switch (ajaxRequest.status) {
                        case 200:
                            var jsonOut = JSON.parse(ajaxRequest.responseText);
                            addMachineToDropdown(jsonOut.??, machineDropdown);
                            break;
                        default:
                            document.getElementById("showMessage").innerHTML = ajaxRequest.responseText;
                    }
                }
            }
            var aMonthAgo = new Date();
            with (aMonthAgo) { setMonth(getMonth() - 1) }
            aMonthAgo = aMonthAgo.getYYYYMMDD();
            var ??? = "29991231";
            var url = "../resources/machine/list/" + window.workRegionCode + "/" + aMonthAgo + "/" + ???;
            ajaxRequest.open("GET", url, true);
            ajaxRequest.send(null)
        }

        function getWorkRegions() {
            var ajaxRequest = new XMLHttpRequest();
            ajaxTimeout = setTimeout(function() { timeoutAjax(ajaxRequest, "showMessage") }, "5000");
            ajaxRequest.onreadystatechange = function() {
                if (ajaxRequest.readyState == 4) {
                    clearTimeout(ajaxTimeout);
                    switch (ajaxRequest.status) {
                        case 200:
                            var jsonOut = JSON.parse(ajaxRequest.responseText);
                            // set global variable workRegionCode
                            setFirstIndexWorkRegionCode(jsonOut);
                            addWorkRegionToDropdown(jsonOut.???, workRegionDropdown);
                        default:
                            document.getElementById("showMessage").innerHTML = ajaxRequest.responseText;
                    }
                }
            }
            var url = "../resources/workshop/list";
            ajaxRequest.open("GET", url, true);
            ajaxRequest.send(null)
        } // end getWorkRegions()

        function setFirstIndexWorkRegionCode(jsonString) {
            // here I set the value to work region code!
            window.workRegionCode = jsonString.???[0].??????;
            alert("??????: " + window.workRegionCode);
        }
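
    The likely explanation (my reading of the code above, since XMLHttpRequest is used asynchronously): window.onload calls getMachines() immediately after getWorkRegions() returns, but the readyState == 4 callback that assigns window.workRegionCode has not fired yet at that point. A sketch that defers the second call until the first has finished (the onDone parameter is invented for illustration):

        // Pass the dependent call in as a continuation instead of calling
        // it right away in window.onload.
        function getWorkRegions(onDone) {
            // ... same XMLHttpRequest setup as above; then, inside the
            // case 200 branch, after the global has been assigned:
            //     setFirstIndexWorkRegionCode(jsonOut);
            //     addWorkRegionToDropdown(jsonOut.???, workRegionDropdown);
            //     if (onDone) onDone();
        }

        window.onload = function() {
            getWorkRegions(getMachines); // getMachines now sees the global set
        };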

    Read the article

  • To copy data from a webpage into an array of structs and sort it by “name” before producing the data.

    - by Taylor
    #include <iostream>
    #include <string>
    #include <iomanip>
    #include <fstream>
    using namespace std;

    struct productJewelry
    {
        string name;
        double amount;
        int itemCode;
        double size;
        string group;
    };

    int main()
    {
        // declare variables
        ifstream inFile;
        int count = 0;
        int x = 0;
        productJewelry product[50];

        inFile.open("jewelry.txt"); // file must be in same folder
        if (inFile.fail())
            cout << "failed";

        cout << fixed << showpoint; // fixed format, two decimal places
        cout << setprecision(2);

        while (inFile.peek() != EOF)
        {
            // cout << count << " : ";
            count++;
            inFile >> product[x].itemCode;
            inFile >> product[x].name;
            inFile >> product[x].size;
            inFile >> product[x].amount;
            inFile >> product[x].group;
            // cout << product[x].itemCode << ", " << product[x].name << ", " << product[x].size << ", " << product[x].amount << endl;
            x++;
            if (inFile.peek() == '\n')
                inFile.ignore(1, '\n');
        }
        inFile.close();

        string temp;
        bool swap;
        do
        {
            swap = false;
            for (int x = 0; x < count; x++)
            {
                if (product[x].name > product[x+1].name)
                {
                    // these 3 lines are to swap elements in array
                    temp = product[x].name;
                    product[x].name = product[x+1].name;
                    product[x+1].name = temp;
                    swap = true;
                }
            }
        } while (swap);

        for (x = 0; x < count; x++)
        {
            // cout << product[x].itemCode << " ";
            // cout << product[x].name << " ";
            // cout << product[x].size << " ";
            // cout << product[x].amount << " ";
            // cout << product[x].group << " " << endl;
        }

        system("pause"); // to freeze Dev-C++ output screen
        return 0;
    } // end main

    THE FILE THAT NEEDS TO PRINT AND BE SORTED IN ALPHABETICAL ORDER:

        0011 Money_Clip 2.000 50.00 Other
        0035 Paperweight 1.625 175.00 Other
        0457 Cuff_Bracelet 2.375 150.00 Bracelet
        0465 Links_Bracelet 7.125 425.00 Bracelet
        0585 Key_Chain 1.325 50.00 Other
        0595 Cuff_Links 0.625 525.00 Other
        0935 Royale_Pendant 0.625 975.00 Pendant
        1092 Bordeaux_Cross 1.625 425.00 Cross
        1105 Victory_Medallion 0.875 30.00 Pendant
        1111 Marquis_Cross 1.375 70.00 Cross
        1160 Christina_Ring 0.500 175.00 Ring
        1511 French_Clips 0.687 375.00 Other
        1717 Pebble_Pendant 1.250 45.00 Pendant
        1725 Folded_Pendant 1.250 45.00 Pendant
        1730 Curio_Pendant 1.063 275.00 Pendant

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why

    Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response.

    Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.

    Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

    B. DON'T: Leave unread email lurking around

    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.

    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.

    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved.

    Sometimes the thing that needs to be done based on receiving the email, you can (and want to) do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.

    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to

    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):

    "personal" - Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.

    "External" and "ViaBlog" - The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.

    "invites" - I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.

    "Inbox" - The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.

    "ToMe++" - Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).

    "CC" - Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).

    "@ XYZ" - Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.

    "Z Mass" and subfolders under it per distribution list (DL) - Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.

    "Admin" folder, which resides under the "Z Mass" folder - Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.

    "BCC" folder, which resides under "Z Mass" - Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).

    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

    D. DO: Use Outlook Search folders in combination with categories

    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

    ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
    Action – no email is required, but I need to do something.
    ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
    WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
    SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

    For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use the shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.

    E. DO: Get into a Routine

    Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:

    Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
    Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
    Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

    F. Other resources

    Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too.

    Comments about this post welcome at the original blog.

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old, actually too old, idea I have carried around ever since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I addressed the responsible person of the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

    Why did it take you so long? Well, I don't want to bother you with the details, but the short version is that I was too busy on either the job (building up new companies) or private life (got married and we have two lovely children, eh 'monsters') or even both. But now is the time where I am starting to look for new fields, given the fact that I have gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that!

    Why a new user group? Good question... And 'easy' to answer. Since back in 2007 I did my usual research, eh Google searches, to see whether there were existing user groups in Mauritius and in which fields of interest. And yes, there are! If I recall this correctly, then there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (which used to be even two). But... either they do not exist anymore, they are dormant, or there is only a low heart-beat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticizing others doing a very good job, I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

    Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or short: MSCC? As I've already written in my article on 'Communities - The importance of exchange and discussion', I think it is essential in a world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamic and every day's news that it is almost impossible to keep on track with all of them. The MSCC is going to provide a common platform to exchange experience and share knowledge between each other. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator, or you're an experienced geek that loves to share your ideas or solutions that you implemented to solve a specific problem, or you're the business (or HR) guy that is looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

    Photo - Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go.

    The MSCC is technology-agnostic and spans an umbrella over any kind of technology, simply because you can't ignore other technologies anymore in a connected IT world like ours. A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure Web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer or get new impulses on how to tackle an issue at their workspace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy, compared to dusty books and remote online resources. It's human!

    Organising meetups (meetings, get-togethers, gatherings - you name it!): As of writing this article, the MSCC is currently meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change eventually, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-) The MSCC's main online presence is located at Meetup.com, because it allows me to handle the organisation of events and meeting appointments very easily, and any member can have a look at who else is involved, so that an exchange of contacts is given at any time. In combination with the other entities (G+ Communities, FB Pages or Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community.

    This is a community for those who care about and are proud of what they do. For those developers, regardless how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and to stay informed on the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

    Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. Following is an overview of the current networks:

    Twitter - Latest updates and quickies
    Google+ - Community channel
    Facebook - Community Page
    LinkedIn - Community Group
    Trello - Collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC between your colleagues, your friends and other interested 'geeks'.

    Your future looks bright: Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone. On the one side it is very joyful for me to organise appointments and get in touch with people that might be interested to present a little demo of their projects or the recent problems they had to tackle, and on the other side there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your point of view on the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees. Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above mentioned social networks. Let's get this community up and running, my fellow Mauritians!

    Recent updates: The MSCC is now officially participating in the O'Reilly UK User Group program and we are allowed to request review copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programs are pending, and I'm looking forward to a couple of announcements very soon. And... we need some kind of 'corporate identity' - over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article
