Search Results

Search found 14288 results on 572 pages for 'channel management'.


  • How do I correctly SSH port forward using LiveReload on Redhat?

    - by program247365
    Referencing this page: http://feedback.livereload.com/knowledgebase/articles/86280-if-you-edit-files-directly-on-your-server
    It says you can remotely port forward the LiveReload-specific port of 35729 using this command:

        ssh -L 35729:127.0.0.1:35729 mylogin@myremoteserverIP

    When I run it with the -v option, I get:

        debug1: Local connections to LOCALHOST:35729 forwarded to remote address 127.0.0.1:35729
        debug1: Local forwarding listening on ::1 port 35729.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 35729.
        debug1: channel 1: new [port listener]
        debug1: channel 2: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: client_input_channel_req: channel 2 rtype keepalive@openssh.com reply 1
        debug1: Connection to port 35729 forwarding to 127.0.0.1 port 35729 requested.
        debug1: channel 3: new [direct-tcpip]
        channel 3: open failed: connect failed: Connection refused
        debug1: channel 3: free: direct-tcpip: listening port 35729 for 127.0.0.1 port 35729, connect from 127.0.0.1 port 63673, nchannels 4

    I thought adding this line to my /etc/services would work, but it doesn't:

        livereload      35729/tcp    # livereload usage with guard-livereload

    Every time I attempt to connect with the browser extension, I believe it's getting blocked by my server. What am I missing here? Do I need to edit /etc/services for this to work?
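
    A quick sanity check (a hedged sketch; exact commands vary by Red Hat release): "connect failed: Connection refused" on channel 3 means the SSH tunnel itself is fine, but nothing was listening on the server's 127.0.0.1:35729. /etc/services is only a name-to-number lookup table; editing it neither opens a port nor starts a service, so that change can be reverted. The thing to verify is that the LiveReload watcher (e.g., guard-livereload) is actually running on the server:

        # On the Red Hat server: is anything bound to 35729?
        netstat -tlnp | grep 35729     # or: ss -tlnp | grep 35729

        # If nothing is listening, start the watcher first, e.g. with guard-livereload
        # (assumes a Guardfile configured for livereload):
        bundle exec guard

    Once a listener is bound on the server, the same ssh -L command should carry the browser extension's connection through with no /etc/services changes at all.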


  • Redirecting or routing all traffic to OpenVPN on a Mac OS X client

    - by sdr56p
    I have configured an OpenVPN (2.2.1) server on an Ubuntu virtual machine in the Amazon Elastic Compute Cloud. The server is up and running. I have installed OpenVPN (2.2.1) on a Mac OS X (10.8.2) client and I am using the openvpn2 binary to connect (as opposed to other clients like Tunnelblick or Viscosity). I can connect with the client and successfully ping or ssh the server through the tunnel. However, I can't redirect all internet traffic through the VPN, even if I use the push "redirect-gateway def1 bypass-dhcp" option in the server.conf configuration. When I connect to the server with these configurations, I get a successful connection, but then an infinite series of error messages: "write UDPv4: No route to host (code=65)". Traffic routing seems to be compromised, because I am not able to access anything anymore, not even the OpenVPN server (by pinging 10.8.0.1, for instance). This is beyond me. I am finding little help on the web and don't know what to try next. I don't think it is a problem of forwarding the traffic on the server since, first, I have also taken care of that and, second, I can't even ping the VPN server locally through the tunnel (or ping anything at all, for that matter). Thank you for your help.

    Here is the server.conf file:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert ec2-server.crt
        key ec2-server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    And the client.conf file:

        client
        dev tun
        proto udp
        remote servername.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert Toto5.crt
        key Toto5.key
        ns-cert-type server
        comp-lzo
        verb 3

    Here is the connection log with the error messages (first a run without the redirect, then a run where the redirect is pushed):

        $ sudo openvpn2 --config client.conf
        Wed Mar 13 22:58:22 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:22 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:22 2013 LZO compression initialized
        Wed Mar 13 22:58:22 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:22 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:22 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:22 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:22 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:22 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:22 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:22 2013 TLS: Initial packet from 54.234.43.171:1194, sid=ffbaf343 d0c1a266
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:22 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:23 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:58:25 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:58:25 2013 PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:58:25 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:58:25 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:58:25 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:58:25 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:25 2013 Initialization Sequence Completed
        ^CWed Mar 13 22:58:30 2013 event_wait : Interrupted system call (code=4)
        Wed Mar 13 22:58:30 2013 TCP/UDP: Closing socket
        Wed Mar 13 22:58:30 2013 /sbin/route delete -net 10.8.0.0 10.8.0.5 255.255.255.0
        delete net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:30 2013 Closing TUN/TAP interface
        Wed Mar 13 22:58:30 2013 SIGINT[hard,] received, process exiting
        toto5:ttntec2 Dominic$ sudo openvpn2 --config client.conf --remote ec2-54-234-43-171.compute-1.amazonaws.com
        Wed Mar 13 22:58:57 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:57 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:57 2013 LZO compression initialized
        Wed Mar 13 22:58:57 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:57 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:57 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:57 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:57 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:57 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:57 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:57 2013 TLS: Initial packet from 54.234.43.171:1194, sid=a0d75468 ec26de14
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:58 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:59:00 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:59:00 2013 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:59:00 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:59:00 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:59:00 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 54.234.43.171 0.0.0.0 255.255.255.255
        add net 54.234.43.171: gateway 0.0.0.0
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 0.0.0.0 10.8.0.5 128.0.0.0
        add net 0.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 128.0.0.0 10.8.0.5 128.0.0.0
        add net 128.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 Initialization Sequence Completed
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        ...

    The routing table after a connection WITHOUT the push redirect-gateway (traffic is not redirected to the VPN and everything is working fine; I can ping or ssh the OpenVPN server and access all other Internet resources through my default gateway):

        Destination          Gateway              Flags    Refs   Use  Netif  Expire
        default              user148-1.wireless   UGSc       50     0  en1
        10.8/24              10.8.0.5             UGSc        2     7  tun0
        10.8.0.5             10.8.0.6             UH          3     2  tun0
        127                  localhost            UCS         0     0  lo0
        localhost            localhost            UH          6  6692  lo0
        client.openvpn.net   client.openvpn.net   UH          3    18  lo0
        142.1.148/22         link#5               UCS         2     0  en1
        user148-1.wireless   0:90:b:27:10:71      UHLWIir    50     0  en1        76
        user150-173.wirele   localhost            UHS         0     0  lo0
        142.1.151.255        ff:ff:ff:ff:ff:ff    UHLWbI      0     2  en1
        169.254              link#5               UCS         1     0  en1
        169.254.255.255      0:90:b:27:10:71      UHLSWi      0     0  en1        71

    The routing table after a connection WITH the push redirect-gateway option enabled, as in the server.conf file above (all internet traffic should be redirected to the VPN tunnel, but nothing is working; I can't access any Internet resources at all):

        Destination          Gateway              Flags    Refs   Use  Netif  Expire
        0/1                  10.8.0.5             UGSc        1     0  tun0
        default              user148-1.wireless   UGSc        7     0  en1
        10.8/24              10.8.0.5             UGSc        0     0  tun0
        10.8.0.5             10.8.0.6             UHr         6     0  tun0
        54.234.43.171/32     0.0.0.0              UGSc        1     0  en1
        127                  localhost            UCS         0     0  lo0
        localhost            localhost            UH          3  6698  lo0
        client.openvpn.net   client.openvpn.net   UH          0    27  lo0
        128.0/1              10.8.0.5             UGSc        2     0  tun0
        142.1.148/22         link#5               UCS         1     0  en1
        user148-1.wireless   0:90:b:27:10:71      UHLWIir     1     0  en1       833
        user150-173.wirele   localhost            UHS         0     0  lo0
        169.254              link#5               UCS         1     0  en1
        169.254.255.255     0:90:b:27:10:71       UHLSW       0     0  en1
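
    One detail in the failing log worth chasing (an observation plus a hedged sketch, not a verified fix): the client prints "ROUTE default_gateway=0.0.0.0" and then installs the host route to the server as "add net 54.234.43.171: gateway 0.0.0.0", i.e., OpenVPN could not detect the Mac's real default gateway, so once redirect-gateway rewrites the default routes, the tunnel's own UDP packets to 54.234.43.171 have no valid path, which matches the immediate "write UDPv4: No route to host" flood. Two things to try:

        # client.conf -- pin a host route to the VPN server via the pre-VPN
        # default gateway ("net_gateway" is OpenVPN's keyword for it):
        route 54.234.43.171 255.255.255.255 net_gateway

        # On the Ubuntu/EC2 server -- once the client does route out through the
        # tunnel, Internet-bound traffic also needs forwarding and NAT
        # (assuming eth0 is the instance's outbound interface):
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE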


  • How to manage memory using classes in Objective-C?

    - by Flipper
    This is my first time creating an iPhone app and I am having difficulty with the memory management, because I have never had to deal with it before. I have a UITableViewController and it all works fine until I try to scroll down in the simulator. It crashes, saying that it cannot allocate that much memory. I have narrowed it down to where the crash is occurring:

        - (UITableViewCell *)tableView:(UITableView *)aTableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            // Dequeue or create a cell
            UITableViewCellStyle style = UITableViewCellStyleDefault;
            UITableViewCell *cell = [aTableView dequeueReusableCellWithIdentifier:@"BaseCell"];
            if (!cell)
                cell = [[[UITableViewCell alloc] initWithStyle:style reuseIdentifier:@"BaseCell"] autorelease];

            NSString *crayon;
            // Retrieve the crayon and its color
            if (aTableView == self.tableView) {
                crayon = [[[self.sectionArray objectAtIndex:indexPath.section] objectAtIndex:indexPath.row] getName];
            } else {
                crayon = [FILTEREDKEYS objectAtIndex:indexPath.row];
            }
            cell.textLabel.text = crayon;
            if (![crayon hasPrefix:@"White"])
                cell.textLabel.textColor = [self.crayonColors objectForKey:crayon];
            else
                cell.textLabel.textColor = [UIColor blackColor];
            return cell;
        }

    Here is the getName method:

        - (NSString *)getName {
            return name;
        }

    name is defined as:

        @property (nonatomic, retain) NSString *name;

    Now, sectionArray is an NSMutableArray holding instances of a class I created, Term. Term has a method getName that returns an NSString*. The problem seems to be the part where crayon is being set and getName is being called. I have tried adding autorelease, release, and other stuff like that, but that just causes the entire app to crash before even launching. Also, if I do:

        cell.textLabel.text = @"test"; //crayon;
        /*if (![crayon hasPrefix:@"White"])
            cell.textLabel.textColor = [self.crayonColors objectForKey:crayon];
        else
            cell.textLabel.textColor = [UIColor blackColor];*/

    then I get no error whatsoever and it all scrolls just fine. Thanks in advance for the help!

    Edit: Here is the full log from when I try to run the app and the error it gives when it crashes:

        [Session started at 2010-12-29 04:23:38 -0500.]
        [Session started at 2010-12-29 04:23:44 -0500.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-967) (Tue Jul 14 02:11:58 UTC 2009)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "i386-apple-darwin".
        sharedlibrary apply-load-rules all
        Attaching to process 1429.
        gdb-i386-apple-darwin(1430,0x778720) malloc: *** mmap(size=1420296192) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        gdb stack crawl at point of internal error:
        [ 0 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (align_down+0x0) [0x1222d8]
        [ 1 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (xstrvprintf+0x0) [0x12336c]
        [ 2 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (xmalloc+0x28) [0x12358f]
        [ 3 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (dyld_info_read_raw_data+0x50) [0x1659af]
        [ 4 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (dyld_info_read+0x1bc) [0x168a58]
        [ 5 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_dyld_update+0xbf) [0x168c9c]
        [ 6 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_solib_add+0x36b) [0x169fcc]
        [ 7 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (macosx_child_attach+0x478) [0x17dd11]
        [ 8 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (attach_command+0x5d) [0x64ec5]
        [ 9 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (mi_cmd_target_attach+0x4c) [0x15dbd]
        [ 10 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (captured_mi_execute_command+0x16d) [0x17427]
        [ 11 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (catch_exception+0x41) [0x7a99a]
        [ 12 ] /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/libexec/gdb/gdb-i386-apple-darwin (mi_execute_command+0xa9) [0x16f63]
        /SourceCache/gdb/gdb-967/src/gdb/utils.c:1144: internal-error: virtual memory exhausted: can't allocate 1420296192 bytes.
        A problem internal to GDB has been detected, further debugging may prove unreliable.
        The Debugger has exited with status 1.

    Here is the backtrace that I get when I set the breakpoint for malloc_error_break:

        #0  0x0097a68c in objc_msgSend ()
        #1  0x01785bef in -[UILabel setText:] ()
        #2  0x000030e0 in -[TableViewController tableView:cellForRowAtIndexPath:] (self=0x421d760, _cmd=0x29cfad8, aTableView=0x4819600, indexPath=0x42190f0) at /Volumes/Main2/Enayet/TableViewController.m:99
        #3  0x016cee0c in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] ()
        #4  0x016c6a43 in -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] ()
        #5  0x016d954f in -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] ()
        #6  0x016d08ff in -[UITableView layoutSubviews] ()
        #7  0x03e672b0 in -[CALayer layoutSublayers] ()
        #8  0x03e6706f in CALayerLayoutIfNeeded ()
        #9  0x03e668c6 in CA::Context::commit_transaction ()
        #10 0x03e6653a in CA::Transaction::commit ()
        #11 0x03e6e838 in CA::Transaction::observer_callback ()
        #12 0x00b00252 in __CFRunLoopDoObservers ()
        #13 0x00aff65f in CFRunLoopRunSpecific ()
        #14 0x00afec48 in CFRunLoopRunInMode ()
        #15 0x00156615 in GSEventRunModal ()
        #16 0x001566da in GSEventRun ()
        #17 0x01689faf in UIApplicationMain ()
        #18 0x00002398 in main (argc=1, argv=0xbfffefb0) at /Volumes/Main2/Enayet/main.m:14
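
    A hedged sketch of one common cause under manual reference counting (an assumption about code not shown, not a confirmed diagnosis): a crash inside -[UILabel setText:] via objc_msgSend usually means crayon points to a deallocated object, which happens when Term's name ivar is assigned an autoreleased string directly instead of through the retain/copy setter. Assigning through the property accessor, and preferring copy for NSString properties, typically cures it:

        // Term.h -- "copy" is the conventional choice for NSString properties
        @property (nonatomic, copy) NSString *name;

        // Term.m -- always assign through the setter so the copy actually happens
        - (id)initWithName:(NSString *)aName {
            if ((self = [super init])) {
                self.name = aName;   // NOT "name = aName;", which would leave a dangling pointer
            }
            return self;
        }

        - (void)dealloc {
            [name release];
            [super dealloc];
        }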


  • Implementing a 1 to many relationship with SQLite

    - by Patrick
    I have the following schema implemented successfully in my application. The application connects desk unit channels to IO unit channels. The DeskUnits and IOUnits tables are basically just a list of desk/IO units and the number of channels on each. For example, a desk could be 4- or 12-channel.

        CREATE TABLE DeskUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE IOUnits (Name TEXT, NumChannels NUMERIC);
        CREATE TABLE RoutingTable (DeskUnitName TEXT, DeskUnitChannel NUMERIC, IOUnitName TEXT, IOUnitChannel NUMERIC);

    The RoutingTable 'table' then connects each DeskUnit channel to an IOUnit channel. For example, the DeskUnit called "Desk1" channel 1 may route to the IOUnit named "IOUnit1" channel 2, etc. So far I hope this is pretty straightforward and understandable. The problem is, however, that this is a strictly 1-to-1 relationship: any DeskUnit channel can route to only 1 IOUnit channel. Now, I need to implement a 1-to-many relationship, where any DeskUnit channel can connect to multiple IOUnit channels. I realise I may have to rearrange the tables completely, but I am not sure of the best way to go about this. I am fairly new to SQLite and databases in general, so any help would be appreciated. Thanks, Patrick
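
    A sketch of one way to allow 1-to-many routing with minimal upheaval: nothing in the schema itself forces 1-to-1, so the change is mainly to treat RoutingTable as a pure link (junction) table with one row per desk-channel to IO-channel connection, and to make the full tuple the primary key so duplicate links are rejected:

        CREATE TABLE RoutingTable (
            DeskUnitName    TEXT    NOT NULL,
            DeskUnitChannel NUMERIC NOT NULL,
            IOUnitName      TEXT    NOT NULL,
            IOUnitChannel   NUMERIC NOT NULL,
            PRIMARY KEY (DeskUnitName, DeskUnitChannel, IOUnitName, IOUnitChannel)
        );

        -- The same desk channel may now appear in several rows:
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit1', 2);
        INSERT INTO RoutingTable VALUES ('Desk1', 1, 'IOUnit2', 5);

        -- All IO channels fed by Desk1 channel 1:
        SELECT IOUnitName, IOUnitChannel
        FROM RoutingTable
        WHERE DeskUnitName = 'Desk1' AND DeskUnitChannel = 1;

    If the application previously relied on getting exactly one row per desk channel, it is the queries, not the tables, that need updating.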


  • Losing NSManaged Objects in my Application

    - by Wayfarer
    I've been doing quite a bit of work on a fun little iPhone app. At one point, I get a bunch of player objects from my persistent store and then display them on the screen. I also have the option of adding new player objects (they're just custom UIButtons) and removing selected players. However, I believe I'm running into some memory management issues, in that somehow the app is not saving which "players" are being displayed. Example: I have 4 players shown; I select them all and then delete them all. They all disappear. But if I exit and then reopen the application, they are all there again, as though they had never left. So somewhere in my code, they are not "really" getting removed.

        MagicApp201AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
        context = [appDelegate managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *desc = [NSEntityDescription entityForName:@"Player" inManagedObjectContext:context];
        [request setEntity:desc];
        NSError *error;
        NSMutableArray *objects = [[[context executeFetchRequest:request error:&error] mutableCopy] autorelease];
        if (objects == nil) {
            NSLog(@"Shit man, there was an error taking out the single player object when the view did load. ", error);
        }
        int j = 0;
        while (j < [objects count]) {
            if ([[[objects objectAtIndex:j] valueForKey:@"currentMultiPlayer"] boolValue] == NO) {
                [objects removeObjectAtIndex:j];
                j--;
            } else {
                j++;
            }
        }
        [self setPlayers:objects];   //This is a must, it NEEDS to work

    (Objects are all the players playing.) So in this snippet (in the viewDidLoad method), I grab the players out of the persistent store, and then remove the objects I don't want (those whose boolValue is NO); the rest are kept. This works, I'm pretty sure. I think the issue is where I remove the players. Here is that code:

        NSLog(@"Remove players");
        /** For each selected player:
            Unselect them (remove them from SelectedPlayers)
            Remove the button from the view
            Remove the button object from the array
            Remove the player from Players */
        NSLog(@"Debugging Removal: %d", [selectedPlayers count]);
        for (int i = 0; i < [selectedPlayers count]; i++) {
            NSManagedObject *rPlayer = [selectedPlayers objectAtIndex:i];
            [rPlayer setValue:[NSNumber numberWithBool:NO] forKey:@"currentMultiPlayer"];
            int index = [players indexOfObjectIdenticalTo:rPlayer];   //this is the index we need
            for (int j = (index + 1); j < [players count]; j++) {
                UIButton *tempButton = [playerButtons objectAtIndex:j];
                tempButton.tag--;
            }
            NSError *error;
            if ([context hasChanges] && ![context save:&error]) {
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            UIButton *aButton = [playerButtons objectAtIndex:index];
            [players removeObjectAtIndex:index];
            [aButton removeFromSuperview];
            [playerButtons removeObjectAtIndex:index];
        }
        [selectedPlayers removeAllObjects];
        NSError *error;
        if ([context hasChanges] && ![context save:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        NSLog(@"About to refresh YES");
        [self refreshAllPlayers:YES];

    The big part of the second code snippet is that I set currentMultiPlayer to NO for each selected player. NO NO NO NO NO, they should NOT come back when the view does load, never ever ever. Not until I say so. No other relevant part of the code sets that to YES. Which makes me think... perhaps they aren't being saved. Perhaps that save doesn't happen, perhaps those objects aren't being managed anymore, and so they don't get saved. Is there a lifetime (metaphorically speaking) of an NSManagedObject?
    The Players array is the same one I set in the viewDidLoad method, and SelectedPlayers holds the players that are selected, references to NSManagedObjects. Does it have something to do with removing them from the array? I'm so confused; some insight would be greatly appreciated!!
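
    A sketch of two things worth checking (assumptions about the intent, not a confirmed fix): if removed players should never survive a relaunch, delete the managed objects outright instead of only flipping the flag, and verify the save actually succeeds rather than assuming it did:

        // Sketch: permanent removal via Core Data, instead of currentMultiPlayer = NO
        for (NSManagedObject *rPlayer in [[selectedPlayers copy] autorelease]) {
            [context deleteObject:rPlayer];   // marks the object for deletion
        }
        NSError *error = nil;
        if (![context save:&error]) {         // nothing hits the store until a successful save
            NSLog(@"Save failed: %@, %@", error, [error userInfo]);
        }

    If the flag approach is kept instead, the fetch in viewDidLoad could filter with an NSPredicate (currentMultiPlayer == YES) rather than pruning the fetched array by hand.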


  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article: "Quantifying the Performance of Garbage Collection vs. Explicit Memory Management" (http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf). In the conclusion section, it reads:

        Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection's performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management.

    So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, then to achieve the same performance with a "managed" (i.e., garbage-collector-based) language (e.g., Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know whether current garbage collectors (i.e., the latest Java VMs and .NET 4.0's) suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.


  • Maven2 - problem with pluginManagement and parent-child relationship

    - by Newtopian
    From the Maven documentation:

        pluginManagement: is an element that is seen along side plugins. Plugin Management contains plugin elements in much the same way, except that rather than configuring plugin information for this particular project build, it is intended to configure project builds that inherit from this one. However, this only configures plugins that are actually referenced within the plugins element in the children. The children have every right to override pluginManagement definitions.

    Now: if I have this in my parent POM

        <build>
          <pluginManagement>
            <plugins>
              <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.0</version>
                <executions>
                  <execution>
                    <!-- Some stuff for the children -->
                  </execution>
                </executions>
              </plugin>
            </plugins>
          </pluginManagement>
        </build>

    and I run mvn help:effective-pom on the parent project, I get what I want; namely, the plugins part directly under build (the one doing the work) remains empty. Now if I do the following:

        <build>
          <pluginManagement>
            <!-- ... same maven-dependency-plugin block as above ... -->
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-compiler-plugin</artifactId>
              <version>2.0.2</version>
              <inherited>true</inherited>
              <configuration>
                <source>1.6</source>
                <target>1.6</target>
              </configuration>
            </plugin>
          </plugins>
        </build>

    and run mvn help:effective-pom, I again get just what I want: the plugins element contains just what is declared, and the pluginManagement section is ignored. BUT, changing to the following

        <build>
          <pluginManagement>
            <!-- ... same maven-dependency-plugin block as above ... -->
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-dependency-plugin</artifactId>
              <version>2.0</version>
              <inherited>false</inherited>
              <!-- this particular config is NOT for kids... for parent only -->
              <executions>
                <execution>
                  <!-- some stuff for adults only -->
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>

    and running mvn help:effective-pom, the stuff from the pluginManagement section is added on top of what is declared already, as such:

        <build>
          <pluginManagement>
            ...
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-dependency-plugin</artifactId>
              <version>2.0</version>
              <inherited>false</inherited>
              <!-- this particular config is NOT for kids... for parent only -->
              <executions>
                <execution>
                  <!-- Some stuff for the children -->
                </execution>
                <execution>
                  <!-- some stuff for adults only -->
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>

    Is there a way to exclude the part meant for children from the parent POM's effective build? In effect, what I want is for pluginManagement to behave exactly as the documentation states: to apply to children only, but not to the project in which it is declared. As a corollary, is there a way to override the parts from pluginManagement by declaring the plugin in the normal build section of a project? Whatever I try, the managed section is added to the executions, but I cannot override one that already exists.

    EDIT: I never did find an acceptable solution for this, and as such the issue remains open. The closest solution was offered below and is currently the accepted answer to this question, until something better comes up.
    Right now there are three ways to achieve the desired result (modulate plugin behaviour depending on where in the inheritance hierarchy the current POM is):

    1 - Using profiles. It will work, but you must beware that profiles are not inherited, which is somewhat counter-intuitive. They are (if activated) applied to the POM where declared, and then this generated POM is propagated down. As such, the only way to activate the profile for a child POM is specifically on the command line (at least, I did not find another way). Property, file and other means of activation fail to activate the profile for the child, because the trigger is not in the POM where the profile is declared.

    2 - (This is what I ended up doing.) Declare the plugin as not inherited in the parent, and re-declare (copy-paste) the tidbit in every child where it is wanted. Not ideal, but it is simple and it works.

    3 - Split the aggregation nature and parent nature of the parent POM. Then, since the part that only applies to the parent is in a different project, it is now possible to use pluginManagement as originally intended. However, this means that a new artificial project must be created that does not contribute to the end product but only serves the build system. This is a clear case of conceptual bleed. Also, this only applies to my specific case and is hard to generalize, so I abandoned efforts to make this work in favor of the not-pretty but more contained cut-and-paste patch described in 2.

    If anyone coming across this question has a better solution, either because of my lack of knowledge of Maven or because the tool evolved to allow this, please post the solution here for future reference. Thank you all for your help :-)
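
    A minimal POM sketch of workaround 2 above (the element names are standard Maven; the execution contents are placeholders): the parent keeps its own work out of the children with plugin-level inherited=false, and the child-facing tidbit is repeated verbatim in each child rather than managed centrally:

        <!-- parent pom.xml: runs here only, never inherited -->
        <plugin>
          <artifactId>maven-dependency-plugin</artifactId>
          <version>2.0</version>
          <inherited>false</inherited>
          <executions>
            <execution>
              <id>parent-only-work</id>
              <!-- stuff for adults only -->
            </execution>
          </executions>
        </plugin>

        <!-- each child pom.xml: copy-pasted, to the same effect pluginManagement was meant to give -->
        <plugin>
          <artifactId>maven-dependency-plugin</artifactId>
          <version>2.0</version>
          <executions>
            <execution>
              <id>child-work</id>
              <!-- stuff for the children -->
            </execution>
          </executions>
        </plugin>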


  • Something about Property Management or … the understanding of SharePoint admins/roles?!?

    - by Enrique Lima
    When I talk about SharePoint, for some reason it comes to my mind as if it were property management and all the tasks associated with it.

    So, imagine you have a lot (a piece of land of sorts); you then decide there is something you want to do with it. So, you make the choice of having a building built. Now, in order to go forward with your plan, you need to check what the rules/regulations are. Has it been zoned residential, commercial, industrial... you get the idea. This, to me, sounds like Governance: the "what am I to do" given a defined set of rules.

    We keep moving forward based on those rules. And with this we start the process of building: the building process takes us to survey the land and identify what our boundaries are. And as we go along, we start getting the idea in our heads as to what we will do as far as the building goes. We identify the essentials of the building, basic services and such. All in all, we plan. And as with many things we do, we like solid foundations. What a solid foundation looks like will depend on where and what we build. The way buildings are built depends in many ways on being able to foresee the potential for natural disasters, or on trying to leverage the lay of the land. Sound familiar? We have done our Requirements Gathering.

    We have the building in place, we have followed the zoning rules, we have implemented services. But we need someone to manage the building, so now we move on to the human side of the story. We want to establish a means to normalcy in the building, someone that can be the monitoring agent as to the "what's going on?" of it. This person will be tasked with making sure all basic services are functional, that measures are taken if there is an issue, and so on. Enter the Farm Administrator. In a way, we establish an extension of the rules to make sure the building and the apartments/offices built follow a standard set of rules too.

    Now, in turn, you will have people leasing or buying the apartments/offices; they will be the keepers of that space. So, now we are building sites: we have moved from having the building (farm) ready to leasing/selling offices/apartments (site collections). There will be someone assuming responsibility for those offices; that person will authorize or be informed about activities, and will decide who not only gets a code into the building, but perhaps a key to the office. Enter the Site Collection Administrator. And then perhaps we move on to the person that would be responsible for specifics within the office, for example a Human Resources Manager or Coordinator. They will have specific control of, and knowledge about, people. A facilities coordinator, and so on. I would translate that into Site Administrators.

    With that said, we identify the following:

        Role                       Responsibility (but not limited to)
        Farm Administrator         Infrastructure
        Site Collection Admin      Policies for Content, Hierarchy, Recycle Bin, Security and Access
        Site Owner (Site Admin)    Security and Access, Training, Guidance, Manage Templates

    All in all, there are different levels of responsibility to be handled, but it is very important to understand what they are and what they mean. Here is a link to a very well laid out explanation of this: http://www.endusersharepoint.com/2009/08/11/site-managers-and-end-user-expectations-roles-and-responsibilities/


  • How to set a management IP on a Dell powerconnect 5524/5548 switch?

    - by John Little
    When you first power on a 5524, connected via the serial console, you are offered a setup wizard where you can enter the management IP/netmask/gateway and set the admin password. HOWEVER, if you don't do this within 60 seconds, the wizard disappears, and there seems to be no way to run it again, even if you reboot the box. No commands work in the CLI; if you type, say, enable or login, it just gives:

        >login
        Unknown parameter
        May be one from the following list:
        debug    help

    So no commands seem to work. The CLI reference guide does not seem to document any way to run the wizard again, or to set the management IP or admin password. So by not responding within 60 seconds after boot, the unit is bricked. Any ideas?


  • Why does my Intel HDA onboard sound card not have a "Mix" device / channel?

    - by Hanno Fietz
    I want to be able to record what my sound card outputs on the speakers / headphones. This question is all over the interwebs again and again, and there seem to be two outcomes:

    - In your selection of audio input devices, there's a device called "Stereo Mix" or similar, which is the "loopback" device for audio. Choose that in your recording tool and you're done.
    - There's no such device, and only speculative posts about why that may be.

    Now, I'm using ALSA and an Intel HDA chipset on my mainboard, under Kubuntu Karmic. I have some 5-10 output channels, and "Mic", "Front Mic" and "Line" for input. All of those are available in KMix, Audacity and other software. No "loopback" / "Mix" / whatever. Do I have to:

    - get a driver / kernel module,
    - set up ALSA in some way,
    - set up my system configuration in some way, or
    - use a software solution (such as JACK)?

    I had a look at JACK and found it rather hard to understand; it's either an expert tool or just clumsy, I couldn't say. At least, I wasn't able to figure out how to achieve what I wanted. One of my problems seems to be that I don't understand where and how the mixing happens. Are there sound cards which just aren't able to do it? Why does the sound card matter at all, since I could in theory grab the data stream at some point before it goes to the hardware, right?
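
    One kernel-level option (a sketch, assuming the stock Ubuntu kernel ships ALSA's loopback module; device numbering may differ on your system): load snd-aloop, play to the Loopback card, and record from its capture side:

        # Load the ALSA loopback driver; it creates a new card named "Loopback"
        sudo modprobe snd-aloop

        # Anything played to hw:Loopback,0 can be recorded from hw:Loopback,1
        aplay   -D hw:Loopback,0 some_audio.wav &
        arecord -D hw:Loopback,1 -f cd captured.wav

    Capturing everything the desktop plays additionally means pointing the default output at the loopback device (or at an ALSA "multi" plug feeding both the real card and the loopback), which is exactly the kind of wiring JACK, or PulseAudio's monitor sources, automate.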


  • How can I get Opera speed-dial and password management features in other browsers?

    - by Howard Guo
    I heavily rely on Opera's speed dial and password management features. The lack of these two features is really stopping me from switching to another web browser, such as Chrome or Firefox. Opera's password management has two unique characteristics which I rely on heavily:

    - It saves passwords on all pages, (apparently) even when the page's metadata asks that passwords not be saved.
    - It offers a keyboard shortcut and button to automatically fill in the username/password and all other fields in a login form, then automatically submit the form. (So I'm only one key/click away from logging into a website.)

    How can I get those functions in other browsers? Thank you!

    Edit: The reason being that I use other web browsers at work, and I wish they could have those functions.


  • SQL Server Management Studio Reports: Why no open transactions?

    - by Sleepless
    On a server with several hundred user connections, when I open the SQL Server 2008 SP1 Management Studio report "Database - User Statistics", the result page shows the following:

        Login Name:           appUser
        Active Sessions:      243
        Active Connections:   243
        Open Transactions:    374

    Still, when I open the report "Database - All Transactions" on the same DB, it doesn't show any transactions ("Currently, there are no transactions running for [Database Name] Database"). What gives? Is this a bug in Management Studio? This is not the only report where this kind of behaviour happens... Thanks, all!
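
    A way to cross-check the report numbers outside SSMS (a sketch using the SQL Server 2008 transaction DMVs; it counts sessions that currently hold an open transaction, per login):

        SELECT s.login_name,
               COUNT(*) AS sessions_with_open_tran
        FROM sys.dm_tran_session_transactions AS t
        JOIN sys.dm_exec_sessions AS s
          ON s.session_id = t.session_id
        GROUP BY s.login_name;

    If this disagrees with the "User Statistics" report, the report itself is suspect; if it agrees, the "All Transactions" report may simply be scoped differently than expected.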


  • Why is CPU power management not working in Server 2012 with Hyper-V?

    - by Roland
    We've been using Server 2008 R2 with Hyper-V for a couple of years now, and chose it at the time because of its ability to make use of Intel SpeedStep and AMD PowerNow!. Now, with Server 2012 and Hyper-V v3, all power management abilities seem to be gone. The CPUs are always at full speed, and our servers need twice the energy as before while idling. (Yes, the CPU P-states are enabled in the BIOS.) Is this by design? Is there a workaround to enable CPU power management again? Despite the great new features of Hyper-V 3, this would be a show-stopper for us, since we are very concerned about energy consumption.
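
    A first thing worth ruling out (a diagnostic sketch, not a confirmed fix for the Hyper-V behaviour): check which Windows power plan the 2012 host is actually running, since the processor minimum/maximum state settings live there and a "High performance" plan pins the CPUs at full speed regardless of BIOS P-states:

        REM From an elevated command prompt on the host:
        powercfg /list
        powercfg /query SCHEME_CURRENT SUB_PROCESSOR
        powercfg /setactive SCHEME_BALANCED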


  • How to stop windows resizing when the monitor display channel is turned off / switched to different source

    - by Heartspeace
    I have a new 6870 ATI Radeon adapter, with its drivers set to 1080p 60 Hz resolution, hooked up to a 2008 47" high-end Samsung HDMI-based TV. However, when the TV is switched to a different HDMI input, then (when I come back into Windows) somehow Windows decides to resize all the open apps to a lower resolution, including some of the side-docked hidden pop-outs. When it resizes those, though, it just sticks the pop-outs in the middle of the screen, and all the resized windows from the open applications in the top-left corner, all of them stacked on top of each other and resized to the smaller resolution. The things that seem to be fine after returning are the icons on the desktop, the taskbar, and the sidebar.

    Does anyone have any knowledge of:

    - how this happens,
    - why it happens, and
    - how to stop it from resizing the applications and some of the docked pop-outs? (They are not really resized after returning; they are just stuck in the middle of the screen, approximately where they would be if the right or bottom sidebar sat on a screen resized to that lower resolution.)

    My hypothesis is that, upon losing the HDMI signal, Windows is told by something (the driver, or Windows itself) what the resolution should be while no signal is present (noting that HDMI signals and handshakes are two-way on HDMI devices: if it loses the signal, or the TV is switched to another device, then the display adapter must figure that out and tell Windows, or seemingly decide at random to change the display size). Any and all help is most appreciated. I asked AMD/ATI, but they said they don't know why or how this is happening. I was hoping that maybe this is THE place that the super users truly go to: those that develop display adapter drivers, or that dive deeply into these areas of Windows. If there are better sites, or just competing sites, please advise, noting that I have already written to AMD/ATI.

    HP Response / Additions 4/7/2011:
    It is really nice to get your reply, Shinrai. (BTW, is it proper etiquette on these forums to have a discussion?) Yet 'only one issue': I am using a single display in this case, so Windows doesn't move application windows to another desktop. Windows (or something) decides to shrink the desktop it currently has and resize all windows to the maximum size of that desktop. As such, I would be glad if Windows would just keep the current size of the one desktop that is in operation. I also know that this does NOT happen on monitors connected with DVI. There I have had one- and two-monitor setups, and it doesn't resize those screens at all when disconnecting monitors, turning them off, whatever... they stay solid, everything in place, to such an extent that if you forget the other monitor is off, you will have trouble finding some windows without using one of the window-control utilities. So if I could even get the HDMI handling by Windows (or the display driver) to behave like DVI, life would be golden (for this aspect, anyway). Two questions remain: (1) which is doing this anyway, the display driver or Windows? and (2) where is that other resolution size (1024x768) coming from? It's not the smallest and it's not the largest. ** I found others with the same problem in this thread: http://hardforum.com/showthread.php?t=1507324 Thanks, HP


  • Missing menu items for Azure SQL tables within SQL Server Management Studio?

    - by Sid
    I have a table (say Table1) that is replicated via SQL Data Sync Agent across a local SQL Server 2012 as well as an Azure SQL server (part of Microsoft Azure). Everything about Table1 (schema, table values, etc.) is identical, to the best of my understanding. However, when I list and right-click Table1 from Microsoft SQL Server Management Studio 2012 (SSMS), I get some very different menu options, even for seemingly basic stuff. Let's focus only on the 'Design' menu item:

    - It is visible for Table1 on the local SQL Server in SSMS.
    - It is missing for Table1 on Azure SQL via SSMS.
    - It is visible for Table1 (as 'Open Table Definition') on Azure SQL when reaching it via Visual Studio 2012 (Server Explorer - Data Connections).

    This is seen in the screenshots below. Now, I use scripts for the real work (especially when I need to check in the SQL scripts, etc.), but this difference concerns me to some extent. Am I witnessing just a tools artifact in SQL Server Management Studio when connecting to Azure SQL? Or is it something more serious about limitations of Azure SQL itself (although just seeing the Design surface is so basic!)?


  • Recommend a company to take care of webserver and WordPress management?

    - by javipas
    I'm interested in setting up a professional WordPress site, but I'd like to explore the possibility of leaving the management of the webserver, and even of WordPress itself, to a company that guarantees great availability, performance of the site (load times, security) and even SEO. My site is currently running on another platform, but I plan on a migration in the next 4 weeks. I've usually done this myself, but I'd like to focus on the content, so that I don't have to mess with webserver/mysql/php configs in order to get nice performance. Is there some (maybe hosting) company that is dedicated to this? Would it be better to hire a sysadmin with experience in those matters?


  • Does SQL Server Management Studio 2008 Activity Monitor work with SQL Server 2000?

    - by Andrew Janke
    I am trying to use SQL Server Management Studio 2008's Activity Monitor with a SQL Server 2000 instance to diagnose some query performance issues. I can connect SSMS 2008 to the DB fine, and use it to browse objects and run queries. But when I press the Activity Monitor button, it pops up an error message saying:

        Microsoft SQL Server Management Studio
        This operation does not support connections to Microsoft SQL Server Personal Edition version 8.00.818.

    This MSDN article implies that Activity Monitor works with SQL Server 2000. Is it the fact that it's Personal Edition that's preventing it from working? The error message isn't clear about whether it's the edition or the version that's the problem.


  • Connecting to MSSQL Express in a Silverlight 4 app, the DB doesn't show up in Management Studio Express

    - by Gabriel
    I'm using a SQL Server Express named instance in my Silverlight 4 application. The database is located in the web application's data folder. I attached the DB via VS2010. The program works, but the database doesn't show up in Management Studio Express. If I delete the connection from within VS2010 and try to attach the DB via Management Studio Express, it writes that a database with the same name already exists. Why doesn't the database attached via VS2010 show up in Management Studio Express? Thanks in advance, Gabor
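
    A plausible explanation (an assumption based on the symptoms, not confirmed from the project files): VS2010's "attach a .mdf from App_Data" connections typically run through a User Instance, a private child instance of SQL Express spun up per Windows user, so the database is never registered with the parent named instance that Management Studio Express connects to. The giveaway is a connection string of this shape in web.config:

        <!-- hypothetical web.config entry; names and paths are placeholders -->
        <add name="MyDb"
             connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDb.mdf;Integrated Security=True;User Instance=True"
             providerName="System.Data.SqlClient" />

    Attaching the .mdf permanently to the named instance (and switching the connection string to Initial Catalog, dropping AttachDbFilename and User Instance) would make the database visible in both tools at once.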


  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    This has been a busy week for Microsoft, and for me as well. On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX. That evening I flew from New York to Seattle. On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made. Readers of this blog know I'm a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity, comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago. On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.

    But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts. While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with the iPod touch in the portable music player market.

    If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so.

    Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not. HTML 5 and the Web are strategic, so here comes IE9, and it's a very good browser. Try it and see. Silverlight is strategic too, as are SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production. Downloads of that product have exceeded Microsoft's projections by more than 50%, and the company is even citing analyst firms' figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that's a separate discussion.) Windows Phone is strategic too... I wasn't 100% positive of that before, but the Nokia agreement has made me confident. Xbox as an entertainment appliance is also strategic. Standalone music players are not strategic, and even if they were, selling them has been a losing battle for Microsoft. So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources. Essentially, it would be for the greater good.

    But it's not all good. In this scenario, Zune player customers would lose out. Unless they wanted to switch to Windows Phone, and then use their phone's battery for their portable media needs, they're going to need a new platform. They're going to feel abandoned. Even if Zune lives, there have been other such cul-de-sacs for customers. Remember SPOT watches? Live Spaces? The original Live Mesh? Microsoft discontinued each of these products. The company is to be commended for cutting its losses, as admitting a loss isn't easy. But Redmond won't be well regarded by the victims of those decisions. Instead, it gets black marks.

    What's the answer? I think it's a bit like the 1980s New York City "don't block the box" gridlock rules: don't enter an intersection unless you see a clear path through it. If the light turns red and you're blocking the perpendicular traffic, that's your fault in judgment. You get fined and get points on your license, and you don't get to shrug it off as beyond your control. Accountability is key. The same goes for Microsoft: if it decides to enter a market, it should see a reasonable path through to success in that market.

    Switching analogies, Microsoft shouldn't make investments haphazardly, and it certainly shouldn't ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns. People won't continue to invest with a fund manager who has a track record of over-zealous, imprudent, sub-prime investments. The same is true on the product side for Microsoft, and not just with music players and geeky wristwatches. It's true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms, and operating systems too. When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility. That doesn't mean all risk is bad, but it does mean no product team's risk should be taken lightly.

    For mutual fund companies, it's the CEO's job to give his fund managers autonomy, but to make sure they're conforming to a standard of rational risk management, because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.


  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    p { margin-bottom: 0.08in; }h1 { margin-top: 0.33in; margin-bottom: 0in; color: rgb(54, 95, 145); page-break-inside: avoid; }h1.western { font-family: "Cambria",serif; font-size: 14pt; }h1.cjk { font-family: "DejaVu Sans"; font-size: 14pt; }h1.ctl { font-size: 14pt; } Getting Started with Business Transformations A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined: the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1) Fig. 1: Business Process Management Lifecycle Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forethought of both are questions, such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see figure 2) Fig. 2: The Level Concept, Generic Process HierarchySome of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy. The Level Concept in Brief Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications. 
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level), that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization. Key Considerations Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are Three Key Factors to consider when building an Enterprise Process Map: Company Strategic Focus Process Categorization: Customer is Core End-to-end versus Functional Processes Company Strategic Focus As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives, and in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see figure 3). Fig. 3: Company X Enterprise Process Map In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories. 
    Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

    Fig. 4: Company Y Enterprise Process Map

    Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle.

    Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show a high-level assignment of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and to serve as a driver for focusing the business on the strategic objectives it sets forth.

    Process Categorization: Customer is Core

    In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how a company categorizes its processes is typically more complex than simply choosing object shapes and colors.

    Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer.

    Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company.
    It is the value chain, beginning with the customer in mind and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See Figure 5)

    Fig. 5: Company Z Enterprise Process Map

    Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom) of Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, the company has further divided it into the focus areas of its two primary business lines, Foods and Beverages.

    Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, the use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

    End-to-end versus Functional Processes

    Every high and low level of process (function, task, activity, process/work step, whatever an organization calls it) should add value to the flow of business in an organization. Suppose that within the process "Deliver Package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, builds up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered."

    Figure 6 shows a simple example: just as the package can only be delivered (the outcome of the process) after first being retrieved, loaded, and the travel destination reached (the outcomes of the process steps), some higher level of process, "Play Practical Joke" (e.g., a main process or process area), cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, the package cannot be created, and the end-to-end process breaks.

    Fig. 6: End-to-End Process Construction

    That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
    But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, though not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and the access points to the business-critical end-to-end processes they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost in a mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them. (See Figures 7 and 8)

    Fig. 7: Simplified Enterprise Process Map with End-to-End Access Point

    The examples in Figures 7 and 8 show two unique ways to achieve a successful Enterprise Process Map. The first (Figure 7) is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

    Fig. 8: Detailed Enterprise Process Map Showing Connected Functional Processes

    The second example (Figure 8) shows a more complex arrangement and categorization of functional processes (the names of the process areas have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

    Conclusion

    In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that are put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations behind Enterprise Process Maps is not a prescriptive "how to" guide; a company attempting to create one may not have the practical BPM experience to truly explore its options or the impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be cognizant of the message he is delivering and the factors at hand.
    Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information, please see Oracle Business Process Management.

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta

    Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has observed a transformation not only in the way IT systems support corporate projects but also in the role project portfolio management (PPM) plays in the enterprise. "Fifteen years ago project management was the domain of the project management office (PMO)," Mahmud recalls of earlier days. "But over the course of the past decade, we've seen it transform into a mission-critical enterprise discipline that has made Primavera indispensable in the boardroom. Now, as a senior manager, a board member, or a C-level executive, you have direct and complete visibility into what's going on in the organization, at a level of detail at which you can actually consume that information."

    Now serving as Oracle's vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle's Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive.

    Profit: What do you feel are the market dynamics that are changing project management today?

    Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies to project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies, the very competitive companies, from those that are not as competitive.

    Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world's workforce is going to be mobile. That's one billion people. So the question is not if you're going to go mobile, it's how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented.

    Profit: What is the role of project management in this new landscape?

    Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers, and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That's a huge gap.

    Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can't keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, "Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?" This line of questioning can provide early warning that work and strategy are out of alignment, revealing the gap between what the business needs to do and the actual performance scorecard. That's low-hanging fruit for any executive looking to increase efficiency and save money.
    But it can only be obtained through proper assessment of existing projects, and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks that don't line up with corporate goals, even for a few hours, will in many ways become a competitive differentiator.

    Profit: How do all of these market challenges and shifting trends impact Oracle's Primavera solutions and meeting customers' needs?

    Mahmud: For Primavera, it's a transformation from being a project management application to a PPM system in the enterprise, and also making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle's Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what's going on in the organization at a level at which you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn't have before from a project execution perspective.

    Profit: What new strategies and tools are being implemented to create a more efficient workplace for users?

    Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn't make it so; you have to get people to want to participate in the system. This can't be mandated down from the top. It simply doesn't work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces, such as native smartphone applications for the iPhone and Android platforms.

    Profit: How does this translate into the development of Oracle's Primavera solutions?

    Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what's going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced e-mail integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day, and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
    Do it in such a way that it's non-intrusive; do it in such a way that it's easy and intuitive, so they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • How do I use IImgCtx to load an image with an alpha channel?

    - by fret
    I have some working code that uses IImgCtx to load images, but I can't work out how to get at the alpha channel. Images like GIFs and PNGs have transparent pixels, but using anything other than a 24-bit bitmap as a drawing surface doesn't work. For reference on the interface: http://www.codeproject.com/KB/graphics/JianImgCtxDecoder.aspx

    My code looks like this:

        IImgCtx *Ctx = 0;
        HRESULT hr = CoCreateInstance(CLSID_IImgCtx, NULL, CLSCTX_INPROC_SERVER,
                                      IID_IImgCtx, (LPVOID*)&Ctx);
        if (SUCCEEDED(hr))
        {
            GVariant Fn = Name;
            hr = Ctx->Load(Fn.WStr(), 0);
            if (SUCCEEDED(hr))
            {
                SIZE Size = { -1, -1 };
                ULONG State = 0;
                while (true)
                {
                    hr = Ctx->GetStateInfo(&State, &Size, false);
                    if (SUCCEEDED(hr))
                    {
                        if ((State & IMGLOAD_COMPLETE) ||
                            (State & IMGLOAD_STOPPED) ||
                            (State & IMGLOAD_ERROR))
                            break;
                        else
                            LgiSleep(1);
                    }
                    else break;
                }
                if (Size.cx > 0 && Size.cy > 0 && pDC.Reset(new GMemDC))
                {
                    if (pDC->Create(Size.cx, Size.cy, 32))
                    {
                        HDC hDC = pDC->StartDC();
                        if (hDC)
                        {
                            RECT rc = { 0, 0, pDC->X(), pDC->Y() };
                            Ctx->Draw(hDC, &rc);
                            pDC->EndDC();
                        }
                    }
                    else pDC.Reset();
                }
            }
            Ctx->Release();
        }

    Here "StartDC" basically wraps CreateCompatibleDC(NULL) and "EndDC" wraps DeleteDC, with appropriate SelectObject calls for the HBITMAPs, and pDC->Create(x, y, bit_depth) calls CreateDIBSection(... DIB_RGB_COLORS ...). So it works if I create a 24 bits/pixel bitmap, but then there is no alpha to speak of, and it leaves a 32 bits/pixel bitmap blank.

    This interface is apparently used by Internet Explorer to load images, and obviously THAT supports transparency, so I believe it's possible to get some level of alpha out of the interface. The question is how? (I also have fallback code that calls libpng/libjpeg/my .gif loader, etc.)
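    One technique that doesn't rely on any undocumented IImgCtx behavior is to recover the alpha channel from two composited renderings: call Ctx->Draw() once into a 24-bit surface pre-filled black and once into one pre-filled white, then solve for alpha per pixel. For a pixel with straight alpha A and color C, drawing over black yields C*A/255 and drawing over white yields C*A/255 + (255 - A), so A = 255 - (white - black) on any channel. Below is a minimal sketch under those assumptions; RecoverAlpha and the 4-bytes-per-pixel BGRX buffer layout are illustrative helpers, not part of any API:

        #include <windows.h>

        // Hypothetical helper: recover straight alpha from two renderings of
        // the same image, one composited over black and one over white.
        // Both inputs are 4-bytes-per-pixel BGRX buffers of equal dimensions;
        // the output receives BGRA with the recovered alpha channel.
        void RecoverAlpha(const BYTE *onBlack, const BYTE *onWhite,
                          BYTE *outBgra, int width, int height)
        {
            for (int i = 0; i < width * height; i++)
            {
                const BYTE *b = onBlack + (i * 4);
                const BYTE *w = onWhite + (i * 4);
                BYTE *o = outBgra + (i * 4);

                // white - black == 255 - alpha on every channel; green used here.
                int a = 255 - ((int)w[1] - (int)b[1]);
                if (a < 0) a = 0; else if (a > 255) a = 255;

                if (a > 0)
                {
                    // The render over black is premultiplied color (C * A / 255),
                    // so divide the alpha back out to get straight color.
                    for (int ch = 0; ch < 3; ch++)
                    {
                        int c = b[ch] * 255 / a;
                        o[ch] = (BYTE)(c > 255 ? 255 : c);
                    }
                }
                else
                {
                    o[0] = o[1] = o[2] = 0; // fully transparent; color is moot
                }
                o[3] = (BYTE)a;
            }
        }

    In practice you would FillRect the DIB with a black brush before the first Draw() and a white brush before the second, then run RecoverAlpha over the two pixel buffers. It costs one extra decode-and-blit per image, but it degrades gracefully: a source with no transparency simply comes back fully opaque.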

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has built its presence in specific areas such as governance and architecture, both in terms of practices and methodologies and in terms of products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience: it combines the architecture management solution with OER to deliver a product specialized for SOA Governance, gathering the best of both worlds into a solution that enables SOA Governance projects, initiatives and programs.

    Enterprise Architecture Management System

    The Enterprise Architecture Management System (EAMS) is an automation-based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantage of their features to provide automation capabilities to the users. EAMS provides capabilities to create, customize and analyze repository data, architectural blueprints, reports and analytic charts.

    Oracle Enterprise Repository

    Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of OER include:

    • Asset Management
    • Asset Lifecycle Management
    • Usage Tracking
    • Service Discovery
    • Version Management
    • Dependency Analysis
    • Portfolio Management

    EAMS - OER Edition

    The solution takes the advantages and features of both products and combines them in a symbiotic tool that enhances the quality of SOA Governance initiatives and programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries.

    The SOA Blueprints

    The solution comes with a set of predefined architectural representations that help the organization better perceive its SOA landscape. More blueprints can easily be created to accommodate the organization's needs in terms of detail, audience and metadata.

    Charts & Dashboards

    The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets.

    Time-Based Visualization

    All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the past, present and future. The solution delivers gap analysis with a project-oriented approach, taking into consideration the As-Was, As-Is and To-Be. Differentiating factors of time-based visualization:

    • Extensive automation and maintenance of architectural representations
    • Organization-wide solution
    • Easy access and navigation to and between all architectural artifacts and representations
    • Flexible meta-model, customization and extensibility capabilities
    • Lifecycle management and enforcement of the time dimension over all the repository content
    • Profile-based customization
    • Comprehensive visibility
    • Architectural alignment
    • Friendly and striking user interfaces

    For more information on EAMS visit us here. For more information on SOA visit us here.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum

    Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Setting reply priority on Wifi network with QoS?

    - by Omega
    When using a station and access point that both support QoS over Wi-Fi, is it possible to set the priority (= Traffic Identifier = TID = QoS channel) of the reply? For example, when sending an ICMP ping request at a high priority (= QoS channel), is it possible to force the station to use that same priority (= QoS channel) when sending the reply? A related question: is it possible to force the station into using a different QoS channel?
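    For traffic an application originates, the usual lever is the DSCP/ToS byte of the IP header: WMM-capable Wi-Fi drivers typically map DSCP values to 802.11e access categories (and hence TIDs). Below is a minimal sketch under that assumption, using the standard Berkeley sockets API; the specific DSCP value and its mapping to the Voice access category are illustrative and driver-dependent:

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netinet/ip.h>

        // Mark all packets sent on this socket with DSCP EF (0x2E).
        // Most WMM-capable drivers map DSCP EF to the Voice access
        // category (AC_VO), i.e., a high-priority TID on the air.
        int set_high_priority(int sock)
        {
            int tos = 0x2E << 2;  // DSCP occupies the top six bits of the ToS byte
            return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
        }

    As for the reply: the marking is chosen by the replying host, not the sender. RFC 1349 recommends that ICMP echo replies reuse the ToS of the request, and common stacks do mirror it, so a high-priority ping request will often come back at the same access category, provided the replying station's driver honors the DSCP-to-WMM mapping. Forcing the station onto a different QoS channel for replies would require configuration on the station itself (e.g., local traffic-marking rules), not anything the sender can dictate over the air.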

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >