Search Results

Search found 5064 results on 203 pages for 'automatic ref counting'.

Page 147/203 | < Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >

  • Ubuntu won't connect to wired network

    - by djeikyb
    I'm running 10.04, upgraded from 9.10, maybe, but probably not upgraded from 9.04. I have two wifi routers. Zeus is connected to the dsl modem. Hermes uses a wds bridge with Zeus to extend the network. My desktop (Daedalus) is ethernetted to Hermes. My laptop (Clyde) is wifi, switching to Hermes or Zeus as needed.

    Occasionally, as in whenever I transfer a large file from desktop to laptop, the wds bridge will die. Fixing it means restarting both routers, though it seems Hermes should boot first. This is ridiculous, and eventually I'll get around to asking you guys to help me stop it from happening. More importantly, my desktop requires a reboot to get back on the network. WTF.

    ifconfig shows my nic has no ip. /etc/init.d/networking restart doesn't do anything, not even give me a lousy ip. dhcpcd eth1 grants me an ip address, but doesn't help with internet access. route -n shows what looks like my normal routing table, but pinging google.com informs me it's an unknown host.

        jake@daedalus:~$ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        0.0.0.0         10.1.1.1        0.0.0.0         UG    0      0        0 eth1

    It may be worth noting that I can ping both Zeus (10.1.1.1) and Hermes (10.1.1.4) and my laptop (10.1.1.55). Much obliged for any help. Rebooting is, well, trivial in this instance. But it's stupid. I switched to linux because I like the idea that if one part breaks, you fix it instead of reboot reboot reboot. I've left my poor desktop in disarray, confining myself to my little netbook. My desktop is broken, awaiting magical commands from you brilliant folk.

    (And yes, I know Clyde the netbook should be named Icarus. It was its original name. Ironically the ssd burned out, and I felt it wasn't right when it came to reinstalling.)

    Read the article

  • Web Apps vs Web Services: 302s and 401s are not always good Friends

    - by Your DisplayName here!
    It is not uncommon to have web sites that contain both web UX and services content. The UX part may use WS-Federation (or some other redirect-based mechanism). That means whenever an authorization error occurs (401 status code), it is picked up by the corresponding redirect module and turned into a redirect (302) to the login page. All is good.

    But in services, when you emit a 401, you typically want that status code to travel back to the client agent, so it can do error handling. These two approaches conflict.

    If you think (like me) that you should separate UX and services into separate apps, you don't need to read on. Just do it ;) If you need to mix both mechanisms in a single app – here's how I solved it for a project.

    I subclassed the redirect module – in my case the WIF WS-Federation HTTP module – and modified the OnAuthorizationFailed method. In there I check for a special HttpContext item, and if that is present, I suppress the redirect. Otherwise everything works as normal:

        class ServiceAwareWSFederationAuthenticationModule : WSFederationAuthenticationModule
        {
            protected override void OnAuthorizationFailed(AuthorizationFailedEventArgs e)
            {
                base.OnAuthorizationFailed(e);

                var isService = HttpContext.Current.Items[AdvertiseWcfInHttpPipelineBehavior.DefaultLabel];
                if (isService != null)
                {
                    e.RedirectToIdentityProvider = false;
                }
            }
        }

    Now the question is, how do you smuggle that value into the HttpContext. If it is an MVC-based web service, that's easy of course. In the case of WCF, one approach that worked for me was to set it in a service behavior (a dispatch message inspector, to be exact):

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            if (HttpContext.Current != null)
            {
                HttpContext.Current.Items[DefaultLabel] = true;
            }
        }

    HTH

    Read the article

  • WebCenter Customer Spotlight: Marvel

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Marvel Entertainment, LLC (Marvel) is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media over seventy years. The customer wanted to optimize their brand licensing process, so Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based secure Intranet/Partner Extranet solution now manages the entire life cycle of the brand licensing process. Marvel and their brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content.

    Company Overview
    Marvel Entertainment, LLC (Marvel), a wholly-owned subsidiary of The Walt Disney Company, is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media over seventy years. Marvel utilizes its character franchises in entertainment, licensing and publishing. Sample characters:
    - Spider-Man
    - Iron Man
    - Captain America
    - X-MEN
    - Thor
    - Avengers
    - And a host of others

    Business Challenges
    Marvel wanted to optimize the brand licensing process for their characters and had the following business requirements:
    - Facilitating content worldwide
    - A scalable and flexible infrastructure to manage multiple content types and huge file sizes
    - Optimizing the licensing process workflow through automatic notifications, tracking reviews, issuing approvals, etc.

    Solution Deployed
    Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based secure Intranet/Partner Extranet solution now manages the entire life cycle of the brand licensing process. Internal users can now manage all digital assets related to a character through proper categorization of all items, workflow-based review and approval of branding styles, and a powerful search and retrieval service. Licensees of Marvel brands can now develop and submit concepts and prototypes online, which are reviewed and approved using a collaborative process.

    Business Result
    Marvel and their brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content. The character brand related content is now in the right place, at the right time, at the user's fingertips, with highly improved quality.

    Additional Information
    - Marvel Open World Presentation
    - Oracle WebCenter Content

    Read the article

  • WiFi connected to router, but no internet connection

    - by Quetzacotl
    I just got a new notebook, a ThinkPad Edge E530, and installed Ubuntu on it. I'm pretty new to Ubuntu. On the same laptop, running Windows 7, the Wi-Fi connection works fine. Ethernet works both on Win7 and on Ubuntu. Only Wi-Fi on Ubuntu does not work; it connects to the Wi-Fi access point but I don't have Internet access. My wireless card is an Intel Centrino Wireless-N 2230. What can fix the problem?

    EDIT:

        ifconfig -a

        eth0    Link encap:Ethernet  HWaddr b8:88:e3:30:72:34
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                Interrupt:43 Base address:0x8000

        lo      Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:163 errors:0 dropped:0 overruns:0 frame:0
                TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:10124 (10.1 KB)  TX bytes:10124 (10.1 KB)

        usb0    Link encap:Ethernet  HWaddr 02:15:e0:ec:01:00
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        wlan0   Link encap:Ethernet  HWaddr 68:5d:43:43:71:e1
                inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
                inet6 addr: fe80::6a5d:43ff:fe43:71e1/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:40 errors:0 dropped:0 overruns:0 frame:0
                TX packets:220 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:2801 (2.8 KB)  TX bytes:26230 (26.2 KB)

        route -n

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlan0
        192.168.2.0     0.0.0.0         255.255.255.0   U     2      0        0 wlan0

        cat /etc/resolv.conf

        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 127.0.0.1

        iwconfig

        lo      no wireless extensions.
        usb0    no wireless extensions.
        wlan0   IEEE 802.11bgn  ESSID:"SATELITE"
                Mode:Managed  Frequency:2.462 GHz  Access Point: 00:1F:1F:8D:CC:08
                Bit Rate=1 Mb/s  Tx-Power=16 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:off
                Link Quality=60/70  Signal level=-50 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:93  Invalid misc:243  Missed beacon:0
        eth0    no wireless extensions.

    Read the article

  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. I have two 10GbE cards in a Linux box being used only for this bridge: one with two 10GbaseT ports and another with a single CX4 port. Two client computers are connected with 10GbaseT cards, and the CX4 card connects to a ProCurve switch.

    I can get the bridging happening mostly the way that I want. The clients receive DHCP information from the DHCP server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are OK, not amazing, but working on that is another matter.

    My problem is that the bridging machine has no internet access, meaning I can't update anything or apt-get anything. It can ping all other machines on the local network. I've tried the helpful hints from "Enabling Internet Use on the Bridging Computer" at https://help.ubuntu.com/community/NetworkConnectionBridge and get the following:

        RTNETLINK answers: File exists

    and dhclient br0 does nothing for me :( I think, if it is anything, it is a multiple-route problem, as both br0 and eth4 have IP addresses, even though I have only set it up so that br0 has one.

    Bridge setup details:

        /etc/network/interfaces

        auto br0
        iface br0 inet static
            address 192.168.0.246
            netmask 255.255.255.0
            gateway 192.168.0.1
            broadcast 192.168.0.255
            dns-nameservers 192.168.0.1
            dns-search example.com
            dns-domain example.com
            #(eth2 & eth3 are the 10GbaseT)
            #(eth4 is the CX4 connection)
            pre-up ip link set eth2 down
            pre-up ip link set eth3 down
            pre-up ip link set eth4 down
            pre-up brctl addbr br0
            pre-up brctl addif br0 eth4 eth3 eth2
            pre-up ip addr flush dev eth3
            pre-up ip addr flush dev eth2
            pre-up ip addr flush dev eth4
            post-down ip link set eth4 down
            post-down ip link set eth2 down
            post-down ip link set eth3 down
            post-down ip link set br0 down
            post-down brctl delif br0 eth2 eth3 eth4
            post-down brctl delbr br0

        ifconfig -a

        br0     Link encap:Ethernet  HWaddr 00:15:17:22:20:34
                inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
                inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:4957 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:596320 (596.3 KB)  TX bytes:139952 (139.9 KB)

        eth4    Link encap:Ethernet  HWaddr 00:60:dd:47:7c:05
                inet addr:192.168.0.57  Bcast:192.168.0.255  Mask:255.255.255.0
                inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
                RX packets:15391 errors:0 dropped:51 overruns:0 frame:0
                TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:5916769 (5.9 MB)  TX bytes:154312 (154.3 KB)
                Interrupt:70

        route -n

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth4
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 br0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth4

    Read the article

  • Registering a domain during the Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously was grabbed by a domain grabber two days prior to my own attempt. That was about a year ago. The attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected there were no more offers, and the domain was not sold.

    Now I learn from the whois database that the registration of that domain name ends on 25.12.2012 (a Christmas holiday). This raises two questions for me, and I have failed to find reliable answers on the internet, so maybe someone experienced here can drop a statement or a hint:

    Is it reasonable to expect that the domain name in question will really be free again once the registration end date listed in the whois database has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber does not extend the registration, since it costs money and effort and he failed to sell the domain. Provided this is the case and the domain registration is not prolonged, is that date reliable? Or might it just be some 'default' date?

    I would like to try to register that domain name as soon as it is unregistered. Since the domain grabber registered the domain only two days before my own registration attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I don't mean sending an email to my provider asking them to do so on that day or before, but actually having the process take place then, so as not to wait 1-2 days after the deregistration. My own provider, which I am very happy with, does not offer such a service on a holiday (which is perfectly understandable); they are 'still checking' whether they can offer something automatic. I researched and did not find an answer to the question of whether that is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable?

    Thanks for any reply!

    Read the article

  • Java JRE 1.6.0_35 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    The latest Java Runtime Environment 1.6.0_35 (a.k.a. JRE 6u35-b10) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients.

    What's new in Java 1.6.0_35?
    See the 1.6.0_35 Update Release Notes for details about what has changed in this release. This release is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism.

    32-bit and 64-bit versions certified
    This certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on:
    - Windows XP Service Pack 3 (SP3)
    - Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2)
    - Windows 7 and Windows 7 Service Pack 1 (SP1)
    64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1).

    Worried about the 'mismanaged session cookie' issue?
    No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. These fixes will carry forward and continue to be included in all future JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.

    All JRE 1.6 releases are certified with EBS upon release
    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops, from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops.

    Important
    For important guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: URGENT BULLETIN: All E-Business Suite End-Users Must Manually Apply JRE 6 Updates

    References
    - Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and the integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented and covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics – but let's start with an overview.

    ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. In other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2.

    The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints, you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can't use them in configuration. That's definitely a feature request of mine for the next version of WIF.

    The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token for ADFS (e.g. in an IdP -> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking "Edit Federation Service Properties" in the MMC when the Service tree node is selected.

    OK – I will come back to this basic information in the following posts. Basically I want to go through the following scenarios:
    - ADFS in the IdP role
    - ADFS in the R-STS role (with a chained claims provider)
    - Using the WCF bindings for automatic token issuance
    - Using WSTrustChannelFactory for manual token handling

    Stay tuned…
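
    To give a taste of that last scenario, here is a rough sketch of manual token issuance against one of the username endpoints, using WSTrustChannelFactory plus the WIF bindings mentioned above. Treat it as illustration only: the endpoint address, credentials and realm are placeholders, and exact type names can vary a bit between WIF versions.

        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://server/adfs/services/trust/13/usernamemixed"));
        factory.TrustVersion = TrustVersion.WSTrust13;

        factory.Credentials.UserName.UserName = "bob";
        factory.Credentials.UserName.Password = "secret";

        // appliesTo is the realm / Federation Service Identifier discussed above
        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            AppliesTo = new EndpointAddress("https://server/realm/"),
            KeyType = KeyTypes.Symmetric
        };

        var channel = factory.CreateChannel();
        var token = channel.Issue(rst);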

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:
    - Set the library assembly version to auto-increment build and revision: in AssemblyInfo, [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
    - Major & minor: major should be changed when you have breaking changes, minor should be changed once you have a solid new release. During development I don't increment these.
    - Create a nuspec and version it with the code: in the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.

    Make changes to the code, then run the automated build (ruby/rake) with "rake nuget". The nuget task builds the nuget package and copies it to a local nuget feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file:

        $projectSolution = 'src\\Library.sln'
        $nugetFeedPath = ENV["NuGetDevFeed"]

        msbuild :build => [:clean] do |msb|
          msb.properties :configuration => :Release
          msb.targets :Build
          msb.solution = $projectSolution
        end

        task :nuget => [:build] do
          sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
        end

    Set up the local nuget feed as a nuget package source (this is only required once per machine). Then go to the consuming project and update the package with Update-Package Library (or Install-Package).

    TLDR:
    - change library code
    - run "rake nuget"
    - run "Update-Package Library" in the consuming application
    - build/test!

    If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your nuget feed's apikey, and take out the confirm(s) if you don't want them:

        @ECHO off
        echo Upload %1?
        set /P anykey="Hit enter to continue "
        nuget push %1 apikey
        set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed during testing, once you are done and ready to publish.

    TLDR:
    - consider the version number
    - run the command to copy to the public feed

    Read the article

  • Is 2 lines of push/pop code for each pre-draw-state too many?

    - by Griffin
    I'm trying to simplify vector graphics management in XNA, currently by incorporating state preservation. 2X lines of push/pop code for X states feels like too many, and it just feels wrong to have two lines of code that look identical except for one being push() and the other being pop(). The goal is to eradicate this repetitiveness, and I hoped to do so by creating an interface through which a client can give refs to the classes/structs he wants restored after the rendering calls. Also note that many beginner programmers will be using this, so forcing lambda expressions or other advanced C# features to be used in client code is not a good idea.

    I attempted to accomplish my goal by using Daniel Earwicker's Ptr class:

        public class Ptr<T>
        {
            Func<T> getter;
            Action<T> setter;

            public Ptr(Func<T> g, Action<T> s)
            {
                getter = g;
                setter = s;
            }

            public T Deref
            {
                get { return getter(); }
                set { setter(value); }
            }
        }

    an extension method:

        // doesn't work for structs since this is just syntactic sugar
        public static Ptr<T> GetPtr<T>(this T obj)
        {
            return new Ptr<T>(() => obj, v => obj = v);
        }

    and a Push function:

        // returns a Pop Action for later calling
        public static Action Push<T>(ref T structure) where T : struct
        {
            T pushedValue = structure; // copies the struct data
            Ptr<T> p = structure.GetPtr();
            return new Action(() => { p.Deref = pushedValue; });
        }

    However this doesn't work, as stated in the code. How might I accomplish my goal?

    Example of code to be refactored:

        protected override void RenderLocally(GraphicsDevice device)
        {
            if (!(bool)isCompiled) { Compile(); }

            // TODO: make sure state settings don't implicitly delete any buffers/resources
            RasterizerState oldRasterState = device.RasterizerState;
            DepthFormat oldFormat = device.PresentationParameters.DepthStencilFormat;
            DepthStencilState oldBufferState = device.DepthStencilState;

            {
                // Rendering code
            }

            device.RasterizerState = oldRasterState;
            device.DepthStencilState = oldBufferState;
            device.PresentationParameters.DepthStencilFormat = oldFormat;
        }
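
    For comparison, one direction I have been considering that sidesteps the ref/struct limitations entirely is a small disposable scope: capture the device state in the constructor, restore it in Dispose, and client code shrinks to a single using block with no lambdas. A minimal sketch using the XNA types from the snippet above (the StateScope name and the particular states captured are just placeholders):

        class StateScope : IDisposable
        {
            readonly GraphicsDevice device;
            readonly RasterizerState oldRasterState;
            readonly DepthStencilState oldBufferState;

            public StateScope(GraphicsDevice device)
            {
                this.device = device;
                oldRasterState = device.RasterizerState;     // capture on entry
                oldBufferState = device.DepthStencilState;
            }

            public void Dispose()
            {
                device.RasterizerState = oldRasterState;     // restore on exit
                device.DepthStencilState = oldBufferState;
            }
        }

        // client code: one line per scope instead of a push/pop pair
        using (new StateScope(device))
        {
            // Rendering code
        }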

    Read the article

  • Should I go back to college and graduate with a poor GPA or try to jump into an entry-level development position? [closed]

    - by jshin47
    I once attended a top-10 American university, but I am currently not in school, for several different reasons. Chief among them is that I did very poorly two semesters and even failed one of them (got two F's), which put me in automatic suspension. My major is not CS but math.

    I am in a pickle at the moment. After I was suspended I got a job at a niche IT company in the area. I am employed as something of an IT generalist; my primary responsibilities are Windows systems administration/networking, but I also do some Android, iOS, and .NET development. I have released a few apps to the app store under my name and my company's name, and we have done work for a few big clients. I started working at my job about 1.5 years ago and I am somewhat happily employed, but I do not see it as a long-term fit because it is a small company with little opportunity to advance.

    I would like to move out to California, and particularly to the Bay Area, to get a job at a more reputable or exciting company, even at a lower rate of pay, but I am not sure if I should do that or try to go back to school. If I went back to school, it would take 1-1.5 years and some money to graduate. Best case scenario, I would graduate with a 2.9 or 3.0 GPA. It is a top-10 school, but that's a crappy GPA. If I do not go back to school, I will be in a field where most people have degrees, without a degree. If anything goes wrong I could be really screwed, as I feel I will get no respect without a degree. On the other hand, I really would like to get started in the field and get more serious about developing good development practices, learning new languages/frameworks, and working with people who know a lot more than I do, so I can learn and grow as a developer and eventually do my own thing.

    Basically, I am wondering:
    1. Should I just go back to school? How much does the bad GPA / good school reputation weigh in? What about the fact that I am a math major and not a CS major (I have never taken a CS course)?
    2. Does my skill set as something of a generalist bode well for me finding work at a startup in the Bay Area?
    3. If not (2), should I hunker down and focus on producing a really good (or a few mediocre) iOS apps? Android apps? etc...
    4. How would you look at someone who did great in high school, kind of goofed off in college and eventually quit, and got into development?

    Thanks for any thoughts or input.

    Read the article

  • Java JRE 1.6.0_37 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    My apologies: this certification announcement got lost in the OpenWorld maelstrom. Better late than never. The section below entitled "All JRE 1.6 releases are certified with EBS upon release" should obviate the need for these announcements, but I know that people have gotten used to seeing these certifications referenced explicitly.

    The latest Java Runtime Environment 1.6.0_37 (a.k.a. JRE 6u37-b06) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients.

    What's new in Java 1.6.0_37?
    See the 1.6.0_37 Update Release Notes for details about what has changed in this release. This release is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism.

    32-bit and 64-bit versions certified
    This certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on:
    - Windows XP Service Pack 3 (SP3)
    - Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2)
    - Windows 7 and Windows 7 Service Pack 1 (SP1)
    64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1).

    Worried about the 'mismanaged session cookie' issue?
    No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. These fixes will carry forward and continue to be included in all future JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.

    All JRE 1.6 releases are certified with EBS upon release
    Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops, from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops.

    Important
    For important guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: Planning Bulletin for JRE 7: What EBS Customers Can Do Today

    References
    - Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles
    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • OOF checklist

    - by Daniel Moth
    When going on vacation or otherwise being out of office (known as OOF in Microsoft), it is polite and professional that our absence creates the minimum disruption possible to the rest of the business, and especially our colleagues. Below is my OOF checklist - I try to do these as soon as I know I'll be OOF, rather than leave it for the night before.

    1. Let the relevant folks on the team know the planned dates of absence and check if anybody was expecting something from you during that timeframe. Reset expectations with them, and as applicable try to find another owner for individual activities that cannot wait.

    2. Go through your calendar for the OOF period and decline every meeting occurrence so the owner of the meeting knows that you won't be attending (similar to my post about responding to invites). If it is your meeting, cancel it so that people don't turn up without the meeting organizer being there. Do this even for meetings where the folks should know due to step #1. Over-communicating is a good thing here and keeps calendars all around up to date.

    3. Enter your OOF dates in whatever tool your company uses. Typically that is the notification to your manager.

    4. In your Outlook calendar, create a local Appointment (don't invite anyone) for the date range (All day event), setting the "Show As" dropdown to "Out of Office". This way, people won't try to schedule meetings with you on those days.

    5. If you use Lync, set the status to "Off Work" for that period.

    6. If you won't be responding to email (which, when on vacation, you definitely shouldn't) then in Outlook set up "Automatic Replies (Out of Office)" for that period. This way people won't think you are rude when not replying to their emails. In your OOF message point to an alias (ideally of many people) as a fallback for urgent queries.

    7. If you want to proactively notify individuals of your OOFage, then schedule and send a multi-day meeting request for the entire period. Remember to set the "Show As" to "Free" (so their calendar doesn't show busy/oof to others), set the "Reminder" to "None" (so they don't get a reminder about it), set "Low Importance", and uncheck both "Response Options" so if they don't want this on their calendar, it is just one click for them to get rid of it. Aside: I have another post with advice on sending invites.

    8. If you care about people who would not observe the above but could drop by your office, stick a physical OOF note at your office door or chair/monitor or desk.

    Have I missed any?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations, to be used as dumb kiosks showing a series of slides on large LCD panel TV screens. Once installed, they won't have a keyboard or mouse connected, but they will have a fixed IP on the local network. Everything is configured to auto-start: no automatic updates, no power saving, etc. I think we're pretty much good to go apart from one thing.

    I need the retail staff to be able to restart the boxes if a problem arises. We have VNC running (now that we've turned off desktop enhancements!) so that we can remotely get into the machines if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically out of the way (probably in the ceiling space), so the power button is not easily accessible!

    I'd like some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!) and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop; they double-click it, reply to a simple 'are you sure?' prompt, and then the Ubuntu box is told to restart. The Windows side of that won't be a problem - we can write something using Delphi, Python & Qt4, whatever - it's the Ubuntu side of it I'm stuck with. Out of sight/view, could I have a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for (I have never set that kind of thing up)? The Windows programming side isn't really an issue; it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view.

    The other thing we considered is having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me, that seems a bit unnecessary (though forcing a restart once a week/month might be worthwhile).

    Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
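
    For the record, here is the kind of thing I imagine on the Windows side, sketched in C# with the open-source SSH.NET library. The host, account and password are placeholders, and it assumes the kiosk account has been allowed to run the reboot command via passwordless sudo - all details that would need adapting:

        using Renci.SshNet;

        class KioskRestarter
        {
            static void Main()
            {
                // placeholders: kiosk IP, account and password
                using (var ssh = new SshClient("192.168.1.50", "kiosk", "secret"))
                {
                    ssh.Connect();
                    // assumes /etc/sudoers lets this account reboot without a password
                    ssh.RunCommand("sudo /sbin/reboot");
                    ssh.Disconnect();
                }
            }
        }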

    Read the article

  • Unable to ping inside or outside network with default gateway 0.0.0.0

    - by agentroadkill
    I've been around here before and I could usually piece together everything to more or less get myself up and running, but this time I'm truly stumped. I'm trying to connect my new 14.04 install to a network, and I'm forced to be behind my college's router. Now I've tested the very cable that is right now plugged into my Ubuntu box on a Windows, Mac OS X, and even my friend's Ubuntu 14.04 box, and they all connect no problem. I've been trying to track this down for about two days, but every time I get close to it, the bug jumps to some other piece of my connection.

    Anyway, as it sits, ifconfig -a gives:

        eth2    Link encap:Ethernet  HWaddr 00:1f:bc:08:31:1d
                inet addr:10.32.51.51  Bcast:10.32.51.155  Mask:255.255.255.0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                RX bytes:0  TX bytes:0

    as well as the local loopback, but I'm assuming that is not an issue here.

    sudo dhclient -v eth2 returns:

        Listening on LPF/<hardware address of my integrated NIC, above>
        Sending on   <same>
        Sending on   Socket/fallback
        DHCPREQUEST of 10.32.51.51 on eth2 to 255.255.255.255 port 67 (xid=0x6f4a66ba)
        <two more lines of same>
        DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x156f9fb4)
        <many more of above with varying intervals>
        No DHCPOFFERS received.
        Trying recorded lease 10.32.51.51
        RTNETLINK answers: File exists
        bound: renewal in <large number> seconds

    If I then try ping 8.8.8.8, I get:

        connect: Network is unreachable

    /etc/resolv.conf only contains the two lines telling you not to edit it, while /etc/network/interfaces only has the loopback interface block in it. I've tried commenting out the "option rfc3442" line in /etc/dhcp/dhclient.conf, which seemed to fix this issue for many people, and also adding the line send vendor-class-identifier "MSFT5.0" to dhclient.conf to tell the router I'm a Windows box, in case they don't like Linux. Finally, route -n reveals:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.32.51.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2

    I would like to apologize in advance for the doubtless butchered text alignment, but I'm obviously typing this all by hand, reading from the terminal as I type commands. I'm hoping this is an interesting problem, and not something I blithely stumbled past in my (apparent) over-confidence. TIA!

    Quick addendum before posting: the activity lights on the ethernet port are lit and one blinks during boot, but they rarely (and seemingly randomly) do so afterwards (both are dark), even while running dhclient in the foreground. When I had the Ubuntu box tethered to my MacBook earlier, I got what looked like a normal power/uplink blinking pattern, but was unable to ping one from the other.

    Read the article

  • Does the method of adjustment matter, or just the final calibration?

    - by Steve
    A company produces software (and hardware) that is used both to perform automatic adjustments on electronic test equipment and to perform calibrations of the same equipment. The results of the calibrations are put onto a certificate of calibration that is sent to the customer along with the equipment. This calibration certificate states various conditions of the calibration, such as what hardware (models/serial numbers) and software (version) was used to perform the calibration, as well as things like environmental conditions, etc.

    Assuming that the software used to produce the data listed on the certificate of calibration must have gone through a "test/release" process and must be considered "released" software - does this also mean that the software used for adjustment must also be released?

    I believe that the method (software/environmental conditions/etc.) used or present during adjustment doesn't matter; all that really matters is the end result of the calibration, the conditions present during the calibration, and whether or not the equipment was within the specifications.

    The real question I'm hoping to get answered: is there a reputable source (e.g. NIST or somewhere similar) that addresses this question? (I have searched...)

    The thinking is that during high-volume production runs, the "unreleased" system can be used to perform adjustments, as long as a released system is used to perform the calibrations, since the time required to perform the adjustments is much longer than the calibration. This unreleased system will eventually become released for use, but currently is not.

    Also, please note that there is a distinction between "adjustment" and "calibration". The definition from the BIPM International Vocabulary of Metrology, 2.39:

        Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

    Followed by NOTE 2 (emphasis in original text):

        Calibration should not be confused with adjustment of a measuring system, often mistakenly called "self-calibration", nor with verification of calibration.

    As a side note, I'm not sure why this got downvoted. It's regarding software and its use before and after release. I believe there is a best practice that can be applied, and this is (hopefully) not primarily opinion based.

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open-platform and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

    Two videos available on YouTube walk through the process, at both an introductory conceptual level and then a deep dive into the programming needed, using JDeveloper, Oracle BPM composer and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site.

    Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can be integrated using open data formats with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset.

    The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources can be referenced on Oracle BPM and WLS that cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • Returning a heap block by reference in C++

    - by basicR
    I was trying to brush up my C++ skills. I got 2 functions:

    - concat_HeapVal() returns the output heap variable by value
    - concat_HeapRef() returns the output heap variable by reference

    When main() runs it will be on the stack; s1 and s2 will be on the stack. I pass the values by reference only, and in each of the below functions I create a variable on the heap and concat them.

    When concat_HeapVal() is called it returns me the correct output. When concat_HeapRef() is called it returns me some memory address (wrong output). Why?

    I use the new operator in both functions, hence both allocate on the heap. So when I return by reference, the heap will still be VALID even when my main() stack memory goes out of scope. So it's left to the OS to clean up the memory. Right?

        string& concat_HeapRef(const string& s1, const string& s2)
        {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return *temp;
        }

        string* concat_HeapVal(const string& s1, const string& s2)
        {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return temp;
        }

        int main()
        {
            string s1, s2;
            string heapOPRef;
            string *heapOPVal;

            cout << "String Conact Experimentations\n";
            cout << "Enter s-1 : ";
            cin >> s1;
            cout << "Enter s-2 : ";
            cin >> s2;

            heapOPRef = concat_HeapRef(s1, s2);
            heapOPVal = concat_HeapVal(s1, s2);

            cout << heapOPRef << " " << heapOPVal << " " << endl;
            return -9;
        }

    Read the article

  • Looking for recommendations for a server-side newsletter program

    - by Sparky672
    Hello - I'm currently using a server-side SQL-based mailing list program called Php-List on multiple sites, and it works fairly well. But installation and setup are quite cumbersome and quirky, and the interface is not well organized... neither is the code... with pieces all over the place in random fashion. Customizing the look & feel and full site integration are both tedious and painful. Upgrading the version is made more complex since multiple edits need to be manually transferred each time. Also, probably due to a poor English translation, descriptions and instructions within certain areas of the user interface are contradictory and unclear. You just have to play with it and remember what you did last time it worked.

    It's supposed to be so my customers can send out their own newsletters... after supplying a written tutorial, about half of them seem to stumble through it okay and the other half just hire me to do it for them. So it's not quite easy enough for most average people to use. I'm looking for something that's as easy for them as using a blog or discussion forum.

    It also must be easier to set up and integrate into a site than Php-List. I have no problem getting dirty and writing CSS or HTML by hand, nor do I have any problem editing the program code. Perhaps what I'm looking for is a solution that is more organized, has a better GUI, and is template or "skin" based. That way, if I spend many hours customizing a skin, I can simply update the program and re-use my custom skin without having to reproduce the tedious setup over and over. (I currently maintain a list of about 25 things I must manually edit or add to multiple files in multiple directories each time I install or upgrade Php-List.)

    A great example of what I'm looking for is very much like WordPress or phpBB. They're both easy to install and customize, yet powerful and packed full of features. They're also VERY well organized, making customization less painful.

    So enough yammering for now... does anyone know of something, besides Php-List, with many of the same features as Php-List: maintaining a mailing list with a server-side database, custom sign-up pages, automatic opt-in/opt-out, allowing custom HTML newsletter templates, etc.?

    Thank-you!

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation is that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on).

    Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents.

    SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 GB limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess.

    Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell.

    If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you.

    Cheers, Michael

    Read the article

  • Security Alert for CVE-2012-4681 Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice again! Oracle has just released Security Alert CVE-2012-4681 to address 3 distinct but related vulnerabilities and one security-in-depth issue affecting Java running in desktop browsers. These vulnerabilities are: CVE-2012-4681, CVE-2012-1682, CVE-2012-3136, and CVE-2012-0547. These vulnerabilities are not applicable to standalone Java desktop applications or Java running on servers, i.e. these vulnerabilities do not affect any Oracle server-based software.

    Vulnerabilities CVE-2012-4681, CVE-2012-1682, and CVE-2012-3136 have each received a CVSS Base Score of 10.0. This score assumes that the affected users have administrative privileges, as is typical in Windows XP. Vulnerability CVE-2012-0547 has received a CVSS Base Score of 0.0 because this vulnerability is not directly exploitable in typical user deployments, but Oracle has issued a security-in-depth fix for this issue as it can be used in conjunction with other vulnerabilities to significantly increase the overall impact of a successful exploit.

    If successfully exploited, these vulnerabilities can provide a malicious attacker the ability to plant discretionary binaries onto the compromised system, e.g. the vulnerabilities can be exploited to install malware, including Trojans, onto the targeted system. Note that this malware may in some instances be detected by current antivirus signatures upon its installation. Due to the high severity of these vulnerabilities, Oracle recommends that customers apply this Security Alert as soon as possible. Furthermore, note that the technical details of these vulnerabilities are widely available on the Internet and Oracle has received external reports that these vulnerabilities are being actively exploited in the wild.

    Developers should download the latest release at http://www.oracle.com/technetwork/java/javase/downloads/index.html. Java users should download the latest release of the JRE at http://java.com, and of course Windows users can take advantage of Java Automatic Update to get the latest release.

    For more information:
    - The Advisory for Security Alert CVE-2012-4681 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2012-4681-1835715.html
    - Users can verify that they're running the most recent version of Java by visiting: http://java.com/en/download/installed.jsp
    - Instructions on removing older (and less secure) versions of Java can be found at http://java.com/en/download/faq/remove_olderversions.xml

    Read the article

  • Java @Contented annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time.

    More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard.

    Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore.

    I can:
    - See networks
    - Connect to a network
    - Ping myself (localhost, 192.168.0.103)

    I can't:
    - Ping others (including other devices on the same wireless network, including the gateway/router)
    - Resolve hosts
    - Access any other external resource, whether on my own network or on the internet

    Using Wireshark, I noticed my computer is continuously sending ARP requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP address for which it knows the MAC address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000 ms.

    The output of route -n is:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth1
        192.168.0.0     0.0.0.0         255.255.255.0   U     9      0        0 eth1
        192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

    Before upgrading, wifi worked fine. Using other devices, wifi still works fine. Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions?

    Update: I'm using the wl driver. Here's the relevant output of some commands:

        lspci | grep Wireless
        03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)

        cat /etc/modprobe.d/blacklist.conf
        [...]
        blacklist mac80211
        blacklist brcm80211
        blacklist cfg80211
        blacklist lib80211_crypt_tkip
        blacklist lib80211
        blacklist b43

        cat /etc/rc.local
        sudo modprobe -r lib80211
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko
        sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko
        sudo modprobe wl
        exit 0

    The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade).

    Update 2: added information about the exact hardware. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is:

        03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that is not using Core Data, nor does it support iCloud syncing (yet). All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running isolated on a single device.

    Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key. I need to avoid this possibility. I'm looking for ideas for solving this issue. I have a few requirements that the solution must meet:

    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key or to a string or other type would affect the entire code base and likely result in more bugs than it's worth.

    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection. The data should still resolve properly later when it syncs through iCloud once a connection is available. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.

    One idea I have been toying around with in my head is logically splitting the integer key into two parts. The high 4 or 5 bits could be used as some sort of device id, while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable since I don't need to deal with millions of devices; I just need to deal with the few devices that would be shared by a given iCloud account.

    I'm open to suggestions. Thanks.
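
    To make the split concrete, here is a rough sketch of the packing arithmetic I have in mind (shown in C# for illustration only; the app itself would use Objective-C's long long, and the 4-bit width, the time-based seeding and all the names are assumptions):

        using System;

        // Illustrative only: packs a small per-device id into the top bits of a
        // 64-bit key; the remaining bits hold the locally incremented counter.
        static class KeyFactory
        {
            const int DeviceBits = 4;                        // up to 16 devices per account
            const int CounterBits = 64 - DeviceBits;
            const ulong CounterMask = (1UL << CounterBits) - 1;

            static ulong counter =
                (ulong)DateTime.UtcNow.Ticks & CounterMask;  // seeded from time, as before

            public static long NextKey(int deviceId)
            {
                ulong local = counter++ & CounterMask;
                return (long)(((ulong)deviceId << CounterBits) | local);
            }

            public static int DeviceIdOf(long key)
            {
                return (int)((ulong)key >> CounterBits);     // recover the device id
            }
        }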

    Read the article

  • How to avoid oscillation by async event based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model - view data binding in MVC. Now I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state as described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly.

    The only problem with this system is that events can be reflected back from the hub or from the other data sources, and that can put the system into endless oscillation (an infinite loop in the sync case). For example:

        A -- data source
        B -- data source
        H -- hub

        A -> H -> A           -- reflection from the hub
        A -> H -> B -> H -> A -- reflection from another data source

    In the sync case it is relatively easy to solve this issue. You can compare the current state with the event, and if they are equal, you don't change the state and don't raise the same event again. In the async case I could not find a solution yet. The state comparison does not work with async event handling, because there is eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x) -- can go parallel with
        B(*->y) -> H -> A(x->y)
        -- so first A changes to x state while B changes to y state
        -- then B changes to x state while A changes to y state
        -- and so on for eternity...

    What do you think: is there an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, etc.?

    Update: I don't think I can make this work without a lot of effort. I think this problem is just the same as we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
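
    For reference, here is the synchronous guard I mean, as a minimal sketch in C# (all names are illustrative): the incoming state is applied and re-published only when it actually differs, which swallows reflected events. As noted above, this is exactly the check that stops working once handling becomes async and eventually consistent.

        using System;

        // Sketch of the synchronous echo guard: a reflected event carries the
        // state this source already has, so it is ignored instead of re-published.
        class DataSource
        {
            object state;
            public event Action<object> Changed;   // the hub subscribes to this

            public void Handle(object incoming)
            {
                if (Equals(state, incoming))
                    return;                        // reflected event: swallow it
                state = incoming;
                var handler = Changed;
                if (handler != null)
                    handler(incoming);             // relay the new state via the hub
            }
        }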

    Read the article
