Search Results

Search found 2826 results on 114 pages for 'somebody'.


  • Why does DebugActiveProcessStop crash my debugging app?

    - by SparkyNZ
    I have a debugging program which I've written to attach to a process and create a crash dump file. That part works fine. The problem I have is that when the debugger program terminates, so does the program that it was debugging. I did some Googling and found the DebugActiveProcessStop() API call. This didn't show up in my older MSDN documentation as it was only introduced in Windows XP, so I've tried loading it dynamically from Kernel32.dll at runtime. Now my problem is that my debugger program crashes as soon as the _DebugActiveProcessStop() call is made. Can somebody please tell me what I'm doing wrong?

        typedef BOOL (*DEBUGACTIVEPROCESSSTOP)(DWORD);
        DEBUGACTIVEPROCESSSTOP _DebugActiveProcessStop;

        HMODULE hK32 = LoadLibrary( "kernel32.dll" );
        if( hK32 )
            _DebugActiveProcessStop = (DEBUGACTIVEPROCESSSTOP) GetProcAddress( hK32, "DebugActiveProcessStop" );
        else
        {
            printf( "Can't load Kernel32.dll\n" );
            return;
        }
        if( ! _DebugActiveProcessStop )
        {
            printf( "Can't find DebugActiveProcessStop\n" );
            return;
        }

        ...

        void DebugLoop( void )
        {
            DEBUG_EVENT de;
            while( 1 )
            {
                WaitForDebugEvent( &de, INFINITE );
                switch( de.dwDebugEventCode )
                {
                    case CREATE_PROCESS_DEBUG_EVENT:
                        hProcess = de.u.CreateProcessInfo.hProcess;
                        break;
                    case EXCEPTION_DEBUG_EVENT:
                        // PDS: I want a crash dump immediately!
                        dwProcessId = de.dwProcessId;
                        dwThreadId = de.dwThreadId;
                        WriteCrashDump( &de.u.Exception );
                        return;
                    case CREATE_THREAD_DEBUG_EVENT:
                    case OUTPUT_DEBUG_STRING_EVENT:
                    case EXIT_THREAD_DEBUG_EVENT:
                    case EXIT_PROCESS_DEBUG_EVENT:
                    case LOAD_DLL_DEBUG_EVENT:
                    case UNLOAD_DLL_DEBUG_EVENT:
                    case RIP_EVENT:
                    default:
                        break;
                }
                ContinueDebugEvent( de.dwProcessId, de.dwThreadId, DBG_CONTINUE );
            }
        }

        ...

        void main( void )
        {
            ...
            BOOL bo = DebugActiveProcess( dwProcessId );
            if( bo == 0 )
                printf( "DebugActiveProcess failed, GetLastError: %u\n", GetLastError() );

            hProcess = OpenProcess( PROCESS_ALL_ACCESS, TRUE, dwProcessId );
            if( hProcess == NULL )
                printf( "OpenProcess failed, GetLastError: %u\n", GetLastError() );

            DebugLoop();

            _DebugActiveProcessStop( dwProcessId );
            CloseHandle( hProcess );
        }
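    A likely cause, offered as a hypothesis rather than a confirmed diagnosis: Win32 API functions use the stdcall calling convention, but the typedef above declares a cdecl function pointer, so on 32-bit builds the stack is corrupted the moment the call returns. A minimal sketch of the corrected typedef:

        // DebugActiveProcessStop, like other Win32 APIs, is stdcall; the
        // function-pointer typedef must say WINAPI, otherwise the mismatched
        // calling convention trashes the stack on x86 and crashes the caller.
        typedef BOOL (WINAPI *DEBUGACTIVEPROCESSSTOP)(DWORD);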

  • NHibernateUnitOfWork + ASP.Net MVC

    - by Felipe
    Hi guys, how's it going? This is my first time with DDD, so I'm a beginner! Let's keep it very simple :D I developed an application using ASP.NET MVC 2, DDD and NHibernate. I have a domain model in a class library, my repositories in another class library, and an ASP.NET MVC 2 application. My Repository base class has a constructor into which I inject a dependency (my unique ISessionFactory object, started in global.asax). The code is:

        public class Repository<T> : IRepository<T> where T : Entidade
        {
            protected ISessionFactory SessionFactory { get; private set; }
            protected ISession Session { get { return SessionFactory.GetCurrentSession(); } }

            protected Repository(ISessionFactory sessionFactory)
            {
                SessionFactory = sessionFactory;
            }

            public void Save(T entity) { Session.SaveOrUpdate(entity); }
            public void Delete(T entity) { Session.Delete(entity); }
            public T Get(long key) { return Session.Get<T>(key); }

            public IList<T> FindAll()
            {
                return Session.CreateCriteria(typeof(T)).SetCacheable(true).List<T>();
            }
        }

    And after that I have the specific repositories, like this:

        public class DocumentRepository : Repository<Domain.Document>, IDocumentRepository
        {
            // constructor
            public DocumentRepository(ISessionFactory sessionFactory) : base(sessionFactory) { }

            public IList<Domain.Document> GetByType(int idType)
            {
                var result = Session.CreateQuery("from Document d where d.Type.Id = :IdType")
                    .SetParameter("IdType", idType)
                    .List<Domain.Document>();
                return result;
            }
        }

    There is no transaction control in this code, and it's working fine, but I would like something to coordinate these repositories in my ASP.NET MVC controller, something simple like this:

        using (var tx = /* what can I put here? */)
        {
            try
            {
                _repositoryA.Save(objA);
                _repositoryB.Save(objB);
                _repositotyC.Delete(objC);
                /* ... other tasks ... */
                tx.Commit();
            }
            catch
            {
                tx.RollBack();
            }
        }

    I've heard about NHibernateUnitOfWork, but I don't know it :( How can I configure NHibernateUnitOfWork to work with my repositories? Should I change my simple repository? Suggestions are welcome! Thanks if somebody read this far; if you can help me, I appreciate it! PS: Sorry for my English! Bye =D
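    One simple way to get the using-block shown above, sketched under the assumption that all repositories resolve their ISession from the same ISessionFactory via GetCurrentSession(): wrap an NHibernate ITransaction on the current session, since every repository then enlists in it automatically. Note "NHibernateUnitOfWork" here is a hand-rolled class, not a ready-made NHibernate type.

        public class NHibernateUnitOfWork : IDisposable
        {
            private readonly ITransaction _tx;

            public NHibernateUnitOfWork(ISessionFactory sessionFactory)
            {
                // All repositories share this current session, so their
                // Save/Delete calls take part in this transaction.
                _tx = sessionFactory.GetCurrentSession().BeginTransaction();
            }

            public void Commit()
            {
                _tx.Commit();
            }

            public void Dispose()
            {
                // Anything not explicitly committed is rolled back.
                if (_tx.IsActive)
                    _tx.Rollback();
                _tx.Dispose();
            }
        }

    In the controller the block then reads: using (var tx = new NHibernateUnitOfWork(_sessionFactory)) { ...; tx.Commit(); }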

  • What are GC holes?

    - by tianyi
    I wrote a long-lived TCP connection socket server in C#. A memory spike happens in my server. I used .NET Memory Profiler (a tool) to detect where the memory leaks. Memory Profiler indicates the private heap is huge, and the memory looks something like this (the numbers are not real; what I want to show is that the GC0 and GC2 "Holes" are very, very large, while the data size is normal):

        Managed heaps - 1,500,000KB
          Normal heap - 1400,000KB
            Generation #0 - 600,000KB
              Data - 100,000KB
              "Holes" - 500,000KB
            Generation #1 - xxKB
              Data - 0KB
              "Holes" - xKB
            Generation #2 - xxxxxxxxxxxxxKB
              Data - 100,000KB
              "Holes" - 700,000KB
          Large heap - 131072KB
            Large heap - 83KB
            Overhead/unused - 130989KB
          Overhead - 0KB

    However, what is a GC hole? I read an article about the hole: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said: "The code snippet below is the simplest way to introduce a GC hole into the system."

        //OBJECTREF is a typedef for Object*.
        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            OBJECTREF bObj = AllocateObjectMemory(pTBL);
            //WRONG!!! "aObj" may point to garbage if the second
            //"AllocateObjectMemory" triggered a GC.
            DoSomething (aObj, bObj);
        }

    "All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably 'work.' But this code will crash eventually. Why? If the second call to AllocateObjectMemory triggers a GC, that GC discards the object instance you just assigned to aObj. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler and the GC cannot know that aObj holds a root reference to an object you want kept live."

    I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after the GC? Does it mean something like this?

        {
            aObj = (object *)malloc(sizeof(object));
            free(aObj);
            function(aObj);
        }

    I hope somebody can explain it.
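    Roughly, yes: the analogy is close. aObj is a raw, unmanaged pointer into the managed heap, and a compacting GC may move or reclaim the object during the second allocation, leaving aObj dangling (a wild pointer) even though nothing was explicitly freed. Inside the CLR sources this is prevented by explicitly reporting such locals as roots; a sketch of that pattern from memory (GCPROTECT_BEGIN/GCPROTECT_END are internal CLR macros, shown here as an illustration, not a public API):

        {
            PointerTable *pTBL = o_pObjectClass->GetPointerTable();
            OBJECTREF aObj = AllocateObjectMemory(pTBL);
            GCPROTECT_BEGIN(aObj);  // report aObj as a root; the GC now keeps the
                                    // object alive and fixes aObj up if it moves
            OBJECTREF bObj = AllocateObjectMemory(pTBL);  // may trigger a GC; aObj stays valid
            DoSomething(aObj, bObj);
            GCPROTECT_END();
        }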

  • Very simple test view in MonoTouch draws a line using Core Graphics but view content is not shown

    - by Krumelur
    Hi, I'm about to give up on this very simple test I've been trying to run. I want to add a subview to my window which does nothing but draw a line from one corner of the iPhone's screen to the other and then, using touchesMoved(), draw a line from the last to the current point. The issues: 1. Even the initial line is not visible. 2. When using Interface Builder, the initial line is visible, but drawRect() is never called, even if I call SetNeedsDisplay(). It can't be that hard... can somebody fix the code below to make it work?

    In main.cs, in FinishedLaunching():

        oView = new TestView();
        oView.AutoresizingMask = UIViewAutoresizing.FlexibleWidth | UIViewAutoresizing.FlexibleHeight;
        oView.Frame = new System.Drawing.RectangleF(0, 0, 320, 480);
        window.AddSubview(oView);
        window.MakeKeyAndVisible();

    TestView.cs:

        using System;
        using MonoTouch.UIKit;
        using MonoTouch.CoreGraphics;
        using System.Drawing;
        using MonoTouch.CoreAnimation;
        using MonoTouch.Foundation;

        namespace Test
        {
            public class TestView : UIView
            {
                public TestView () : base() { }

                public override void DrawRect (RectangleF area, UIViewPrintFormatter formatter)
                {
                    CGContext oContext = UIGraphics.GetCurrentContext();
                    oContext.SetStrokeColor(UIColor.Red.CGColor.Components);
                    oContext.SetLineWidth(3.0f);
                    this.oLastPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oLastPoint.Y;
                    this.oCurrentPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oCurrentPoint.Y;
                    oContext.StrokeLineSegments(new PointF[] { this.oLastPoint, this.oCurrentPoint });
                    oContext.Flush();
                    oContext.RestoreState();
                    Console.Out.WriteLine("Current X: {0}, Y: {1}", oCurrentPoint.X.ToString(), oCurrentPoint.Y.ToString());
                    Console.Out.WriteLine("Last X: {0}, Y: {1}", oLastPoint.X.ToString(), oLastPoint.Y.ToString());
                }

                private PointF oCurrentPoint = new PointF(0, 0);
                private PointF oLastPoint = new PointF(320, 480);

                public override void TouchesMoved (MonoTouch.Foundation.NSSet touches, UIEvent evt)
                {
                    base.TouchesMoved (touches, evt);
                    UITouch oTouch = (UITouch)touches.AnyObject;
                    this.oCurrentPoint = oTouch.LocationInView(this);
                    this.oLastPoint = oTouch.PreviousLocationInView(this);
                    this.SetNeedsDisplay();
                }
            }
        }
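    A plausible explanation, offered as a guess: in MonoTouch the override UIKit invokes for ordinary drawing is Draw(RectangleF); DrawRect(RectangleF, UIViewPrintFormatter) binds the printing variant, so it never runs in response to SetNeedsDisplay(). A minimal sketch of the drawing override (RestoreState is dropped too, since no matching SaveState was ever made):

        public override void Draw (RectangleF rect)
        {
            base.Draw (rect);
            CGContext ctx = UIGraphics.GetCurrentContext();
            ctx.SetStrokeColor(UIColor.Red.CGColor.Components);
            ctx.SetLineWidth(3.0f);
            // The context is already in view coordinates here, so the
            // points can be stroked without flipping the Y axis.
            ctx.StrokeLineSegments(new PointF[] { oLastPoint, oCurrentPoint });
        }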

  • NHibernate does not update entity when repository is passed by constructor

    - by Alex
    Hi everybody, I am developing with NHibernate for the first time, in conjunction with ASP.NET MVC and StructureMap. CodeCampServer serves as a great example for me; I really like the different concepts implemented there, and I can learn a lot from it. In my controllers I use constructor dependency injection to get an instance of the specific repository needed. My problem: if I change an attribute of the customer, the customer's data is not updated in the database, although Commit() is called on the transaction object (by an HttpModule).

        public class AccountsController : Controller
        {
            private readonly ICustomerRepository repository;

            public AccountsController(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public ActionResult Save(Customer customer)
            {
                Customer customerToUpdate = repository.GetById(customer.Id);
                customerToUpdate.GivenName = "test"; //<-- customer does not get updated in database
                return View();
            }
        }

    On the other hand, this is working:

        public class AccountsController : Controller
        {
            [LoadCurrentCustomer]
            public ActionResult Save(Customer customer)
            {
                customer.GivenName = "test"; //<-- Customer gets updated
                return View();
            }
        }

        public class LoadCurrentCustomer : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                const string parameterName = "Customer";
                if (filterContext.ActionParameters.ContainsKey(parameterName))
                {
                    if (filterContext.HttpContext.User.Identity.IsAuthenticated)
                    {
                        Customer CurrentCustomer = DependencyResolverFactory
                            .GetDefault()
                            .Resolve<IUserSession>()
                            .GetCurrentUser();
                        filterContext.ActionParameters[parameterName] = CurrentCustomer;
                    }
                }
                base.OnActionExecuting(filterContext);
            }
        }

        public class UserSession : IUserSession
        {
            private readonly ICustomerRepository repository;

            public UserSession(ICustomerRepository customerRepository)
            {
                repository = customerRepository;
            }

            public Customer GetCurrentUser()
            {
                var identity = HttpContext.Current.User.Identity;
                if (!identity.IsAuthenticated)
                {
                    return null;
                }
                Customer customer = repository.GetByEmailAddress(identity.Name);
                return customer;
            }
        }

    I also tried to call Update on the repository, as the following code shows. But this leads to an NHibernateException which says "Illegal attempt to associate a collection with two open sessions". Actually there is only one.

        public ActionResult Save(Customer customer)
        {
            Customer customerToUpdate = repository.GetById(customer.Id);
            customer.GivenName = "test";
            repository.Update(customerToUpdate);
            return View();
        }

    Does somebody have an idea why the customer is not updated in the first example but is updated in the second? Why does NHibernate say that there are two open sessions?
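    A hedged reading of the symptoms, under the assumption that the HttpModule obtains its transaction from a different ISession than the one the repository used: NHibernate only writes dirty entities when their own session is flushed, so committing some other session leaves the change invisible, and Update() then complains about two sessions because the entity's collections are still attached to the first one. One probe to test this in the action (sessionFactory is assumed injectable here, just like in the repositories):

        public ActionResult Save(Customer customer)
        {
            Customer customerToUpdate = repository.GetById(customer.Id);
            customerToUpdate.GivenName = "test";

            // Probe: commit on the session the entity actually belongs to.
            // If this persists the change, the HttpModule must be committing
            // a different session instance than the repositories use.
            using (var tx = sessionFactory.GetCurrentSession().BeginTransaction())
            {
                tx.Commit();  // committing flushes the dirty customerToUpdate
            }
            return View();
        }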

  • DFS Backtracking with Java

    - by Cláudio Ribeiro
    I'm having problems with DFS backtracking in an adjacency matrix. Here's my code (I added the test to main in case someone wants to run it):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Stack;

        public class Graph {
            private int numVertex;
            private int numEdges;
            private boolean[][] adj;

            public Graph(int numVertex, int numEdges) {
                this.numVertex = numVertex;
                this.numEdges = numEdges;
                this.adj = new boolean[numVertex][numVertex];
            }

            public void addEdge(int start, int end) {
                adj[start-1][end-1] = true;
                adj[end-1][start-1] = true;
            }

            List<Integer> visited = new ArrayList<Integer>();

            public Integer DFS(Graph G, int startVertex) {
                int i = 0;
                if (pilha.isEmpty())
                    pilha.push(startVertex);
                for (i = 1; i < G.numVertex; i++) {
                    pilha.push(i);
                    if (G.adj[i-1][startVertex-1] != false) {
                        G.adj[i-1][startVertex-1] = false;
                        G.adj[startVertex-1][i-1] = false;
                        DFS(G, i);
                        break;
                    } else {
                        visited.add(pilha.pop());
                    }
                    System.out.println("Stack: " + pilha);
                }
                return -1;
            }

            Stack<Integer> pilha = new Stack();

            public static void main(String[] args) {
                Graph g = new Graph(6, 9);
                g.addEdge(1, 2);
                g.addEdge(1, 5);
                g.addEdge(2, 4);
                g.addEdge(2, 5);
                g.addEdge(2, 6);
                g.addEdge(3, 4);
                g.addEdge(3, 5);
                g.addEdge(4, 5);
                g.addEdge(6, 4);
                g.DFS(g, 1);
            }
        }

    I'm trying to solve the Euler path problem. The program solves basic graphs, but when it needs to backtrack, it just does not do it. I think the problem might be in the stack manipulation or in the recursive DFS call. I've tried a lot of things, but still can't figure out why it does not backtrack. Can somebody help me?
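    For the Euler-path part, one approach that backtracks naturally is Hierholzer's algorithm: consume an edge, recurse, and only record a vertex once all of its edges are used up. A hedged Java sketch against the same boolean adjacency matrix (0-based vertices here; uses java.util.Deque, e.g. an ArrayDeque):

        // Builds an Euler path/circuit; vertices end up in 'path' so that
        // popping them yields the walk in order.
        void euler(int u, Deque<Integer> path) {
            for (int v = 0; v < numVertex; v++) {
                if (adj[u][v]) {
                    adj[u][v] = adj[v][u] = false;  // consume the edge exactly once
                    euler(v, path);
                }
            }
            path.push(u);  // post-order: u is recorded only when it has no edges left
        }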

  • Is it possible that a Greasemonkey script can work on one computer but not on another?

    - by plastic cloud
    I'm writing a Greasemonkey script for somebody else. He is a moderator and I am not, and the script will help him do some moderating things. Now, the script works for me, as far as it can work for me (since I am not a mod), but even the parts that work for me are not working for him. I checked his versions of the Greasemonkey plugin and Firefox, and he is up to date. The only thing that's really different is that I'm on a Mac and he is on a PC, but I wouldn't think that would be any problem. This is one of the functions that is not working for him: he gets the first and third GM_log messages, but not the second one ("got some (1) ...").

        kmmh.trackNames = function(){
            GM_log("starting to get names from the first "+kmmh.topAmount+" page(s) from leaderboard.");
            kmmh.leaderboardlist = [];
            for (var p=1; p<=(kmmh.topAmount); p++){
                var page = "http://www.somegamesite.com/leaderboard?page="+ p;
                var boardHTML = "";
                dojo.xhrGet({
                    url: page,
                    sync: true,
                    load: function(response){
                        boardHTML = response;
                        GM_log("got some (1) => "+boardHTML.length);
                    },
                    handleAs: "text"
                });
                GM_log("got some (2) => "+boardHTML.length);
                //create dummy div and place leaderboard html in there
                var dummy = dojo.create('div', { innerHTML: boardHTML });
                //search through it
                var searchN = dojo.query('.notcurrent', dummy).forEach(function(node,index){
                    if(index >= 10){
                        kmmh.leaderboardlist.push(node.textContent); // add names to array
                    }
                });
            }
            GM_log("all names from "+ kmmh.topAmount +" page(s) of leaderboard ==> "+ kmmh.leaderboardlist);
        }

    Does anyone have any idea what could be causing this?

    EDIT: I know I had to write according to what he would see on his mod screen, so I asked him to copy and paste the source of his pages, and so on. Besides that, this part of the script does not depend on being a mod or not. I got everything else working for him; just this function still doesn't work, on either of his PCs.
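    One way to see what is failing on his machine, suggested as a guess: dojo.xhrGet reports failures only through its error callback, and this code does not register one, so a request that fails on his side (permissions, a redirect, a different moderator page) would produce exactly these symptoms: the load handler, and with it "got some (1)", simply never fires. A minimal addition:

        dojo.xhrGet({
            url: page,
            sync: true,
            handleAs: "text",
            load: function(response){
                boardHTML = response;
                GM_log("got some (1) => " + boardHTML.length);
            },
            // hypothetical addition: surface the failure instead of losing it
            error: function(err, ioArgs){
                GM_log("xhr failed: " + err + " (status " + ioArgs.xhr.status + ")");
            }
        });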

  • What is wrong with this C++ code?

    - by mr.bio
    Hi, I am a beginner and I have a problem: this code doesn't compile.

    main.cpp:

        #include <stdlib.h>
        #include "readdir.h"
        #include "mysql.h"
        #include "readimage.h"
        int main(int argc, char** argv) {
            if (argc > 1) {
                readdir(argv[1]);
                // test();
                return (EXIT_SUCCESS);
            }
            std::cout << "Bitte Pfad angeben !" << std::endl;  // "Please specify a path!"
            return (EXIT_FAILURE);
        }

    readimage.cpp:

        #include <Magick++.h>
        #include <iostream>
        #include <vector>

        using namespace Magick;
        using namespace std;

        void readImage(std::vector<string> &filenames) {
            for (unsigned int i = 0; i < filenames.size(); ++i) {
                try {
                    Image img("binary/" + filenames.at(i));
                    for (unsigned int y = 1; y < img.rows(); y++) {
                        for (unsigned int x = 1; x < img.columns(); x++) {
                            ColorRGB rgb(img.pixelColor(x, y));
                            // cout << "x: " << x << " y: " << y << " : " << rgb.red() << endl;
                        }
                    }
                    cout << "done " << i << endl;
                } catch (Magick::Exception & error) {
                    cerr << "Caught Magick++ exception: " << error.what() << endl;
                }
            }
        }

    readimage.h:

        #ifndef _READIMAGE_H
        #define _READIMAGE_H

        #include <Magick++.h>
        #include <iostream>
        #include <vector>
        #include <string>

        using namespace Magick;
        using namespace std;

        void readImage(vector<string> &filenames)

        #endif /* _READIMAGE_H */

    I want to compile it with this command:

        g++ main.cpp `Magick++-config --cflags --cppflags --ldflags --libs` readimage.cpp

    I get this error message:

        main.cpp:5: error: expected initializer before ‘int’

    I have no clue why :( Can somebody help me? :)
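    The error message is consistent with one specific defect in the code as posted, pointed out here as the likely cause: the declaration at the end of readimage.h has no terminating semicolon, so the compiler runs it straight into the next token after the #include, which is the 'int' of main on line 5 of main.cpp, producing "expected initializer before 'int'". The one-character fix:

        // readimage.h -- the declaration must end with a semicolon
        void readImage(vector<string> &filenames);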

  • NSURLConnection problem receiving data from some filehosts

    - by Tammo
    Hello again. I am trying to develop a download manager. I can now download files from almost anywhere on link click. In

        - (BOOL)webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType

    I check whether the URL points to a binary file, like a zip file, and if so I set up an NSURLConnection:

        NSMutableURLRequest *urlRequest = [NSMutableURLRequest requestWithURL:url
                                                                  cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                                              timeoutInterval:20.0];
        [urlRequest setValue:@"User-Agent"
          forHTTPHeaderField:@"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3"];
        NSURLConnection *mainConnection = [NSURLConnection connectionWithRequest:urlRequest delegate:self];
        if (nil == mainConnection) {
            NSLog(@"Could not create the NSURLConnection object");
        }

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            self.tabBarController.selectedIndex = 1;
            [receivedData setLength:0];
            percent = 0;
            localFilename = [[[url2 absoluteString] lastPathComponent] copy];
            NSLog(localFilename);
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            NSString *appFile = [documentsDirectory stringByAppendingPathComponent:localFilename];
            [[NSFileManager defaultManager] createFileAtPath:appFile contents:nil attributes:nil];
            [downloadname setHidden:NO];
            [downloadname setText:localFilename];
            expectedBytes = [response expectedContentLength];
            exp = [response expectedContentLength];
            NSLog(@"content-length: %lli Bytes", expectedBytes);
            file = [[NSFileHandle fileHandleForUpdatingAtPath:appFile] retain];
            if (file) {
                [file seekToEndOfFile];
            }
        }

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            if (file) {
                [file seekToEndOfFile];
            }
            [file writeData:data];
            [receivedData appendData:data];
            long long resourceLength = [receivedData length];
            float res = [receivedData length];
            percent = res/exp;
            [progress setHidden:NO];
            [progress setProgress:percent];
            NSLog(@"Remaining: %lli KB", (expectedBytes-resourceLength)/1024);
            [kbleft setHidden:NO];
            [kbleft setText:[NSString stringWithFormat:@"%lli / %lli KB", expectedBytes/1024, (resourceLength)/1024]];
        }

    In connectionDidFinishLoading: I close the file. This works fine for nearly every hoster, except hosters which have a captcha step first, like filedude.com. In the UIWebView I can surf to the download page, enter the captcha and get the download link. When I click on it, the file is created in the documents directory with the right filename and the download starts, but it doesn't receive any data: every file is 0 KB, and NSLog(@"content-length: %lli Bytes", expectedBytes) reports something like 100-400 bytes. Can somebody help me solve this problem? Kind regards.
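    One concrete bug worth ruling out, noted from the posted code rather than guessed: the User-Agent header is set with its arguments reversed, so these hosts never see a usable agent string. setValue:forHTTPHeaderField: takes the value first and the header name second:

        // value first, header field name second
        [urlRequest setValue:@"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3"
          forHTTPHeaderField:@"User-Agent"];

    Beyond that, a content length of 100-400 bytes strongly suggests the host is answering with a small HTML or redirect page (the captcha gate) rather than the file; such hosts typically require the cookies established during the UIWebView session to be carried over onto the download request before they will serve the binary.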

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has some significant cost when executing a program; however, I was wondering whether entering a try{} block also has any impact, so I started looking for an answer in Google. There are many opinions, but no benchmarking at all. Some answers I found:

        Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
        Try Catch Performance Java
        Java try catch blocks

    However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format:

        host;ip;number;date;status;email;uid;name;lastname;promo_code;

    where everything after status is optional and will not even have the corresponding ';', so when parsing, a validation has to be done to see whether the value is there. That's where the try/catch issue came to my mind. The current code that I inherited in my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and in fact it performs faster. When parsing a file that doesn't have the optional columns, here are the average times:

        --- Trying:   122 milliseconds
        --- Checking:  33 milliseconds

    However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is: does the try block really not have any performance impact on my code? The average times for this example are:

        --- Trying:   105 milliseconds
        --- Checking:  43 milliseconds

    Can somebody explain what's going on here? Thanks a lot.
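    An interpretation of these numbers, hedged because it is reasoned from how HotSpot generally works rather than from profiling this exact code: entering a try block is essentially free, since the JVM implements try/catch with compile-time exception tables rather than any runtime setup; the expensive part is throwing, and most of that cost is the exception constructor's fillInStackTrace() walking the stack. With all optional columns present, the remaining gap is more plausibly JIT and measurement effects (no warmup, shared state) than the try block itself. A standard way to confirm where the cost lives is to throw an exception that skips stack capture and watch the difference collapse:

        // If throwing this is dramatically cheaper than NoSuchElementException,
        // the try/catch overhead was stack-trace capture, not the try block.
        class QuietException extends RuntimeException {
            @Override
            public synchronized Throwable fillInStackTrace() {
                return this;  // skip capturing the stack
            }
        }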

  • Suggestions In Porting ASP.NET to MVC.NET - Is storing SiteConfiguration in Cache RESTful?

    - by DaveDev
    I've been tasked with porting/refactoring a web application platform that we have from ASP.NET to ASP.NET MVC. Ideally I could use all the existing platform's configuration to determine the properties of the site that is presented. Is it RESTful to keep a SiteConfiguration object, containing all of our various page configuration data, in System.Web.Caching.Cache? A lot of settings need to be loaded when a user accesses our site, so it's inefficient for each user to load the same settings on every access. Some of the data the SiteConfiguration object contains is listed below; it determines what master page / site configuration / style / UserControls are available to the client.

        public string SiteTheme { get; set; }
        public string Region { private get; set; }
        public string DateFormat { get; set; }
        public string NumberFormat { get; set; }
        public int WrapperType { private get; set; }
        public string LabelFileName { get; set; }
        public LabelFile LabelFile { get; set; }
        // the following two are the heavy ones
        // PageConfiguration contains lots of configuration data for each panel on the page
        public IList<PageConfiguration> Pages { get; set; }
        // This contains all the configurations for the factsheets we produce
        public List<ConfiguredFactsheet> ConfiguredFactsheets { get; set; }

    I was thinking of having a URL structure like this:

        www.MySite1.com/PageTemplate/UserControl/

    where the domain determines the SiteConfiguration object that is created (MySite1.com is SiteId = 1, MySite2.com is SiteId = 2, and in turn style, configurations for various pages, etc.). PageTemplate is the view that will be rendered, and it simply defines a layout into which I'm going to inject the UserControls. Can somebody please tell me if I'm completely missing the RESTful point here? I'd like to refactor the platform to MVC because it's better to work in, and I want to do it right, but with a minimum of reinventing the wheel, because otherwise it won't get approval. Any suggestions otherwise? Thanks.
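    On the REST question itself: statelessness in REST constrains per-client session state, not server-side caching of shared data that every client sees alike, so caching one SiteConfiguration per domain is fine. A minimal sketch of the lookup, where the key scheme and the loader are assumptions rather than anything from your platform:

        // Hypothetical helper: one cached SiteConfiguration per site id,
        // shared by all users of that site. Uses System.Web.Caching.
        public static SiteConfiguration GetSiteConfiguration(int siteId)
        {
            string key = "SiteConfiguration:" + siteId;
            var config = HttpRuntime.Cache[key] as SiteConfiguration;
            if (config == null)
            {
                config = LoadSiteConfigurationFromStore(siteId); // assumed loader
                HttpRuntime.Cache.Insert(key, config, null,
                    DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
            }
            return config;
        }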

  • Self referencing a table

    - by mue
    Hello! I'm new to NHibernate, so I'm a beginner; perhaps somebody can help me here. Given a User class with many, many properties:

        public class User
        {
            public virtual Int64 Id { get; private set; }
            public virtual string Firstname { get; set; }
            public virtual string Lastname { get; set; }
            public virtual string Username { get; set; }
            public virtual string Email { get; set; }
            ...
            public virtual string Comment { get; set; }
            public virtual UserInfo LastModifiedBy { get; set; }
        }

    Here is some DDL for the table:

        CREATE TABLE USERS (
            "ID" BIGINT NOT NULL,
            "FIRSTNAME" VARCHAR(50) NOT NULL,
            "LASTNAME" VARCHAR(50) NOT NULL,
            "USERNAME" VARCHAR(128) NOT NULL,
            "EMAIL" VARCHAR(128) NOT NULL,
            ...
            "LASTMODIFIEDBY" BIGINT NOT NULL
        ) IN "USERSPACE1";

    The database field LASTMODIFIEDBY holds, for auditing purposes, the id of the User who acted in case of inserts or updates; this would normally be an admin. Because the UI shall display not this Int64 but the admin's name (in a pattern like 'Lastname, Firstname'), I need to retrieve these values by self-referencing the USERS table. Next, a whole object of type User would be overkill given the number of unwanted fields, so there is a class UserInfo with a much smaller footprint:

        public class UserInfo
        {
            public Int64 Id { get; set; }
            public string Firstname { get; set; }
            public string Lastname { get; set; }
            public string FullnameReverse
            {
                get { return string.Format("{0}, {1}", Lastname ?? string.Empty, Firstname ?? string.Empty); }
            }
        }

    So here starts the problem: actually I have no clue how to accomplish this task. I'm not sure whether I must also provide a mapping for the UserInfo class, not only for the User class. I'd like to integrate UserInfo as a composite-element within the mapping for the User class, but I don't know how to define the mapping between the USERS.ID and USERS.LASTMODIFIEDBY table fields. Hopefully I described my problem clearly enough to get some hints. Thanks a lot!
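    A composite-element will not work here, because composite elements are values embedded in the owner's own row, while LASTMODIFIEDBY points at a different row. One conventional arrangement, sketched as a suggestion: map UserInfo as its own read-only "lightweight" entity over the same USERS table, and map User.LastModifiedBy as a many-to-one on the LASTMODIFIEDBY column. In hbm.xml form it might look like:

        <class name="UserInfo" table="USERS" mutable="false" lazy="false">
          <id name="Id" column="ID" />
          <property name="Firstname" column="FIRSTNAME" />
          <property name="Lastname" column="LASTNAME" />
        </class>

        <class name="User" table="USERS">
          <id name="Id" column="ID" />
          <!-- ... the other properties ... -->
          <many-to-one name="LastModifiedBy" class="UserInfo" column="LASTMODIFIEDBY" />
        </class>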

  • Rails nested form for belongs_to

    - by user1232533
    I'm new to Rails and have some trouble creating a nested form. My models:

        class User < ActiveRecord::Base
          belongs_to :company
          accepts_nested_attributes_for :company, :reject_if => :all_blank
        end

        class Company < ActiveRecord::Base
          has_many :users
        end

    Now I would like to create a new company from the user sign-up page (I use Devise, by the way), given only a company name, and have a relation between the new User and the new Company. In the console I can create a company for an existing User like this:

        @company = User.first.build_company(:name => "name of company")
        @company.save

    That works, but I can't make it happen for a new user. On my sign-up form I tried this (I know it's wrong to create a new User first, but I'm trying to get something working here...):

        <%= simple_form_for(resource, :as => resource_name, :html => { :class => 'form-horizontal' }, :url => registration_path(resource_name)) do |f| %>
          <%= f.error_notification %>
          <div class="inputs">
            <% @user = User.new
               company = @user.build_company() %>
            <% f.fields_for company do |builder| %>
              <%= builder.input :name, :required => true, :autofocus => true %>
            <% end %>
            <%= f.input :email, :required => true, :autofocus => true %>
            <%= f.input :password, :required => true %>
            <%= f.input :password_confirmation, :required => true %>
          </div>
          <div class="form-actions">
            <%= f.button :submit, :class => 'btn-primary', :value => 'Sign up' %>
          </div>
        <% end %>

    I did my best to google for a solution/example; I found some nested form examples, but it's just not clear to me how to do this. I really hope somebody can help me with this. Any help would be appreciated. Thanks in advance! Greets, Daniel
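    A sketch of the usual arrangement, offered with the caveat that the surrounding Devise setup is assumed: build the company on the resource itself (Devise's resource is the new User), and hand fields_for the association name as a symbol, so the params nest as user[company_attributes][name] and reach accepts_nested_attributes_for:

        <%= simple_form_for(resource, :as => resource_name,
                            :url => registration_path(resource_name)) do |f| %>
          <% resource.build_company if resource.company.nil? %>
          <%= f.simple_fields_for :company do |company_form| %>
            <%= company_form.input :name, :required => true %>
          <% end %>
          <%= f.input :email, :required => true, :autofocus => true %>
          ...
        <% end %>

    If attr_accessible is in play (Rails 3), :company_attributes must also be whitelisted on User.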

  • Spring OpenSessionInViewFilter with @Transactional annotation

    - by Gautam
    This is regarding using Spring's OpenSessionInViewFilter together with the @Transactional annotation at the service layer. I went through so many Stack Overflow posts on this, but I'm still confused about whether I should use OpenSessionInViewFilter or not to avoid LazyInitializationException. It would be a great help if somebody could help me find answers to the queries below.

    1. Is it bad practice to use OpenSessionInViewFilter in an application with a complex schema?
    2. Can using this filter cause the N+1 problem?
    3. If we are using OpenSessionInViewFilter, does it mean @Transactional is not required?

    Below is my Spring config file:

        <context:component-scan base-package="com.test"/>
        <context:annotation-config/>

        <bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
            <property name="basename" value="resources/messages" />
            <property name="defaultEncoding" value="UTF-8" />
        </bean>

        <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
              p:location="/WEB-INF/jdbc.properties" />

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close"
              p:driverClassName="${jdbc.driverClassName}"
              p:url="${jdbc.databaseurl}"
              p:username="${jdbc.username}"
              p:password="${jdbc.password}" />

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="configLocation">
                <value>classpath:hibernate.cfg.xml</value>
            </property>
            <property name="configurationClass">
                <value>org.hibernate.cfg.AnnotationConfiguration</value>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">${jdbc.dialect}</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <!-- <prop key="hibernate.hbm2ddl.auto">create</prop> -->
                </props>
            </property>
        </bean>

        <tx:annotation-driven />

        <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>
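    On query 3 specifically: OpenSessionInViewFilter only opens a Session and binds it to the request thread; it does not start transactions, so @Transactional on the service layer is still required. The filter merely keeps the session open through view rendering so lazy collections can load without LazyInitializationException. A sketch of the web.xml registration matching the Hibernate 3 setup above:

        <filter>
            <filter-name>openSessionInView</filter-name>
            <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
            <!-- must name the session factory bean declared above -->
            <init-param>
                <param-name>sessionFactoryBeanName</param-name>
                <param-value>sessionFactory</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>openSessionInView</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    As for N+1: the filter does not cause it by itself, but it can hide it, since every lazy access during rendering silently issues another query.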

  • NSString cannot be released

    - by Stanley
    Consider the following method and the caller code block. The method analyses an NSString and extracts an "http://" string, which it passes out by reference as an autoreleased object. Without releasing g_scan_result, the program works as expected. But according to non-ARC rules, g_scan_result should be released, since a retain has been called against it. My questions are:

    1. Why can g_scan_result not be released?
    2. Is there anything wrong with the way g_scan_result is handled in the code posted below?
    3. Is it safe not to release g_scan_result as long as the program runs correctly and the Xcode memory leak tool does not show leakage?
    4. Which Xcode profiling tools should I look into to check, and under which subtitle?

    Hope somebody knowledgeable can help.

        - (long) analyse_scan_result:(NSString *)scan_result target_url:(NSString **)targ_url
        {
            NSLog(@" RES analyse string : %@", scan_result);
            NSRange range = [scan_result rangeOfString:@"http://" options:NSCaseInsensitiveSearch];
            if (range.location == NSNotFound)
            {
                *targ_url = @"";
                NSLog(@"fnd string not found");
                return 0;
            }
            NSString *sub_string = [scan_result substringFromIndex:range.location];
            range = [sub_string rangeOfString:@" "];
            if (range.location != NSNotFound)
            {
                sub_string = [sub_string substringToIndex:range.location];
            }
            NSLog(@" FND sub_string = %@", sub_string);
            *targ_url = sub_string;
            return [*targ_url length];
        }

    The following is the caller code block. Also note that g_scan_result has been declared and initialized (in another source file) as:

        NSString *g_scan_result = nil;

    Please do send a comment or answer if you have suggestions or find possible errors in the code posted here (or above). The Xcode memory tools do not seem to show any memory leak, but that may be because I do not know where to look, as I am new to the memory tools.

        {
            long url_leng = [self analyse_scan_result:result target_url:&targ_url];
            NSLog(@" TAR target_url = %@", targ_url);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Scanned Result"
                                                            message:result
                                                           delegate:g_alert_view_delegate
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            if (url_leng)
            {
                // ****** The 3 commented-off statements
                // ****** cannot be added without causing
                // ****** a crash after a few scan result
                // ****** cycles.
                // ****** NSString *t_url;
                if (g_system_status.language_code == 0)
                    [alert addButtonWithTitle:@"Open"];
                else if (g_system_status.language_code == 1)
                    [alert addButtonWithTitle:@"Abrir"];
                else
                    [alert addButtonWithTitle:@"Open"];
                // ****** t_url = g_scan_result;
                g_scan_result = [targ_url retain];
                // ****** [t_url release];
            }
            targ_url = nil;
            [alert show];
            [alert release];
            [NSTimer scheduledTimerWithTimeInterval:5.0
                                             target:self
                                           selector:@selector(activate_qr_scanner:)
                                           userInfo:nil
                                            repeats:NO];
            return;
        }
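    On question 2, a sketch of the standard non-ARC pattern, with the caveat that the crash may still have another source this snippet cannot rule out: retain the new value before releasing the old one, and route every assignment to g_scan_result through the same pattern. A single code path anywhere that assigns without retaining is enough to cause a delayed over-release crash like the one described:

        // retain-new-then-release-old; safe even when both variables
        // already point at the same object
        NSString *newResult = [targ_url retain];
        [g_scan_result release];
        g_scan_result = newResult;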

  • Unfortunately App stopped when destroying SupportMapFragment

    - by user1408341
    I have the following problem. I have three fragments which are hosted in a TabHost. When I'm working with the app, everything works fine. Now I'd like to end the app when the user hits the back button, but instead of terminating without errors I get the message "Unfortunately App stopped". So I said to myself that something must be wrong with the onDestroy() method of the FragmentActivity or with the onDestroyView() method of the Fragment. The problem is that I cannot debug the point where the app crashes; I only get the error "Fatal signal 11 (SIGSEGV)". I then removed the fragments one by one to identify which one causes the error, and could identify the fragment that I named BasicMapFragment. Something is wrong there. The code:

        public class BasicMapFragment extends SupportMapFragment implements LocationListener {

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
                View view = super.onCreateView(inflater, container, savedInstanceState);
                //removeAllMarkers();
                //setupGps();
                //setupMap();
                //setupMarkersFromModel();
                //registerListeners();
                return view;
            }
        }

    I commented out all my self-written code to isolate the place where the error occurs.

        @Override
        public void onDestroy() {
            Log.d("ch.xxx.fragment.BasiceMapFragment", "On destroy called");
            super.onDestroy();
        }

        public void onDestroyView() {
            Log.d("ch.xxx.fragment.BasiceMapFragment", "On destroy view called");
            super.onDestroyView();
        }

    When I press the back button now, the onDestroy() method of my FragmentActivity is called first, as expected. Then the onDestroyView() method is called on my BasicMapFragment class. At the end the onDestroy() method is called, and then the application crashes. Here is my layout file:

        <?xml version="1.0" encoding="utf-8"?>
        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent">

            <fragment
                android:id="@+id/map"
                android:layout_width="match_parent"
                android:layout_height="match_parent"
                class="com.google.android.gms.maps.SupportMapFragment"/>

        </FrameLayout>

    To summarize: the map is shown and I can work with the app. When I leave out the BasicMapFragment, the app finishes without error; when I add the BasicMapFragment, the app returns an error when I press the back button. Is there something I have forgotten to implement? Has somebody had the same trouble?
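    One thing worth checking, raised as a hypothesis: BasicMapFragment already is a SupportMapFragment, yet the layout instantiates a second com.google.android.gms.maps.SupportMapFragment through the <fragment> tag, and two live map instances being torn down together is a known source of exactly this native SIGSEGV in early Google Play services releases. The two setups are alternatives, not partners: either subclass SupportMapFragment and add it in code with no <fragment> tag, or keep the XML and fetch the fragment instead of subclassing, roughly:

        // Alternative without subclassing: let the layout own the single
        // map fragment and look it up from the FragmentActivity.
        SupportMapFragment mapFragment = (SupportMapFragment)
                getSupportFragmentManager().findFragmentById(R.id.map);
        GoogleMap map = mapFragment.getMap();  // may be null until the map is ready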

  • Correct use of a "for...in" loop in JavaScript?

    - by jnkrois
    Hello everybody. Before I ask my question, I wanted to let everybody know that I appreciate the fact that there's always somebody out there willing to help, and on my end I'll try to give back to the community as much as I can. Thanks!

    Now, I would like some pointers on how to properly take advantage of the "for...in" loop in JavaScript. I've already done some research and tried a couple of things, but it is still not clear to me how to use it properly. Let's say I have a random number of "select" tags in an HTML form, and I don't require the user to select an option for all of them; they can leave some untouched if they want. However, I need to know whether they selected none or at least one. The way I'm trying to find out whether the user selected any of them is by using the "for...in" loop. For example:

        var allSelected = $("select option:selected");
        var totalSelected = $("select option:selected").length;

    The first variable produces an array of all the selected options. The second tells me how many selected options I have in the form (there can be more than one select tag, and the number changes every time). Now, in order to see whether any has been selected, I loop through each element (selected option) and retrieve its "value" attribute. The default "option" tag has value="0", so if any selected option returns a value greater than 0, I know at least one option has been selected; however, it does not have to be in order. This is my loop so far:

        for (var i = 0; i < totalSelected; i++) {
            var eachOption = $(allSelected[i]).val();
            var defaultValue = 0;
            if (eachOption == defaultValue) {
                // ...redirect to another page
            } else if (eachOption > defaultValue) {
                // ...I display an alert
            }
        }

    My problem here is that as soon as the "if" matches a 0 value, it sends the user to the next page without testing the rest of the elements in the array, even though the user could have selected the second or third option. What I really want to do is check all the elements in the array and then take the next action. In my mind this is how I could do it, but I'm not getting it right:

        var randomValue = 25;
        for (randomValue in allSelected) {
            var found = true;
            var notFound = false;
            if (found) {
                // display an alert
            } else {
                // redirect to next page
            }
        }

    This loop, or the logic I'm using, is flawed (I'm pretty sure). What I want to do is test all the elements in the array against a single variable and take the next action accordingly. I hope this makes some sense to you guys; any help would be appreciated. Thanks, JC
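    A sketch of one way to do the check: finish scanning all the options first, then branch exactly once on the result:

        var defaultValue = 0;
        var anySelected = false;

        // examine every selected option; take no action inside the loop
        for (var i = 0; i < totalSelected; i++) {
            if (parseInt($(allSelected[i]).val(), 10) > defaultValue) {
                anySelected = true;
                break;  // one real selection is enough to decide
            }
        }

        if (anySelected) {
            // display the alert
        } else {
            // redirect to the next page
        }

    As an aside, for...in enumerates object keys and is not really meant for array traversal; a plain counting loop, or jQuery's .each()/.filter(), fits this job better.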

  • What's the best way to handle modules that use each other?

    - by Axeman
    What's the best way to handle modules that use each other? Let's say I have a module which has functions for hashes:

        # Really::Useful::Functions::On::Hash.pm
        use base qw<Exporter>;
        use strict;
        use warnings;
        use Really::Useful::Functions::On::List qw<transform_list>;

        our @EXPORT_OK = qw<transform_hash transform_hash_as_list ...>;
        #...
        sub transform_hash { ... }
        #...
        sub transform_hash_as_list {
            return transform_list( %{ shift() } );
        }
        #...
        1

    And another module has been segmented out for lists:

        # Really::Useful::Functions::On::List.pm
        use base qw<Exporter>;
        use strict;
        use warnings;
        use Really::Useful::Functions::On::Hash qw<transform_hash>;

        our @EXPORT_OK = qw<transform_list some_func ...>;
        #...
        sub transform_list { ... }
        #...
        sub some_func {
            my %params = transform_hash @_;
            #...
        }
        #...
        1

    Suppose that enough of these utility functions are handy enough that I'll want to use them in BEGIN statements and import functions, to process parameter lists or configuration data. I have been putting sub definitions into BEGIN blocks to make sure they are ready to use whenever somebody includes the module, but I have gotten into hairy race conditions where a definition is not completed in a BEGIN block. I put evolving code idioms into modules so that I can reuse any idiom I find myself coding over and over again. For instance:

        sub list_if {
            my $condition = shift;
            return unless $condition;
            my $more_args = scalar @_;
            my $arg_list  = @_ > 1 ? \@_ : @_ ? shift : $condition;
            if (( reftype( $arg_list ) || '' ) eq 'ARRAY' ) {
                return wantarray ? @$arg_list : $arg_list;
            }
            elsif ( $more_args ) {
                return $arg_list;
            }
            return;
        }

    captures two idioms that I'm kind of tired of typing:

        @{ func_I_hope_returns_a_listref() || [] }

    and

        ( $condition ? LIST : () )

    The more I define functions in BEGIN blocks, the more likely it is that I'll use these idiom bricks to express logic, and the more likely that the bricks are themselves needed in BEGIN blocks. Do people have standard ways of dealing with this sort of language-idiom-brick model? I've been doing mostly pure Perl; will XS alleviate some of this?
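    One conventional way out of the cycle, sketched with a hypothetical module name: push the primitives both sides need down into a third module that uses neither of the others, so that each compile-time import has a fully built module to draw from:

        # Really::Useful::Functions::Core.pm -- no 'use' of Hash or List here
        package Really::Useful::Functions::Core;
        use strict;
        use warnings;
        use base qw<Exporter>;

        our @EXPORT_OK = qw<transform_list list_if>;

        sub transform_list { ... }
        sub list_if        { ... }

        1;

    Hash.pm and List.pm then both import from Core and never from each other. Where restructuring isn't worth it, replacing one side's compile-time use with a runtime require (plus a fully qualified call) also breaks the BEGIN-time race.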

  • Chrome is creating duplicate sessions with the same id

    - by dlwiest
    I encountered an issue while I was revising my session library today, and this might be the first time I've ever seen a browser-specific problem in a back-end script. I hope somebody can shed some light.

    Basically, how the session library works is: when instantiated, it checks for a cookie called 'id' (in the form of a uniqid result) on the client machine. If a cookie is found, the script checks it, together with a hashed copy of the user-agent string, against entries in a session table. If a matching entry is found, the script resumes the session. If no cookie named 'id' is found, or if no matching entry exists in the sessions table, the script creates both. Fairly standard, I think.

    Now here's the weird part: in Firefox, everything works as predicted. The user gets one session, which he'll always resume upon connection, as long as 24 hours of inactivity have not elapsed. But when I visit the page in Chrome, even though it looks the same and appears to execute queries in the same order, I see two entries in the session table. The sessions share an agent string, but the ids are different, and timestamp logs indicate that the ghost session is created shortly (within a second) after the one created for the user.

    For debugging purposes, I've been printing queries to the screen as they're executed, and this is an example of what I see when Chrome should be opening one session and is somehow opening two instead:

        // Attempting to resume a session
        SELECT id FROM sessions WHERE id = '4fd24a5cd8df12.62439982' AND agent = '9bcd5c6aac911f8bcd938a9563bc4eca'

        // No result, so it creates a new one
        INSERT INTO sessions (id, agent, start, last) VALUES ('4fd24ef0347f26.72354606', '9bcd5c6aac911f8bcd938a9563bc4eca', '1339182832', '1339182832')

        // Clear old sessions
        DELETE FROM sessions WHERE last < 1339096432

    And here's what I see in the database afterward:

        id, agent, start, last
        4fd24ef0347f26.72354606, 9bcd5c6aac911f8bcd938a9563bc4eca, 1339182832, 1339182832
        4fd24ef0857f94.72251285, 9bcd5c6aac911f8bcd938a9563bc4eca, 1339182833, 1339182833

    Am I missing something obvious? The only thing I can think of is that Chrome might be creating a hidden session in the background, possibly to crawl the page. If that's the case, though, it could become a problem later, when I begin associating active sessions with entries in the users table. I've been looking for possible bugs in my script, but I haven't found anything so far, and everything works as expected in Firefox.
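    A hypothesis worth testing before digging further into the library: Chrome issues extra background requests (/favicon.ico is the classic one), and if those are routed through the same front controller, a request arriving without the freshly set 'id' cookie would mint a ghost session about a second after the real one, which matches the timestamps above. Logging the request URI and raw user agent next to each created session would confirm it; a guard, assuming PHP since the language isn't stated, might look like:

        <?php
        // Hypothetical guard at the top of the front controller: don't
        // create sessions for asset-style requests such as the favicon.
        if (preg_match('#\.(ico|png|gif|jpg|css|js)$#', $_SERVER['REQUEST_URI'])) {
            header('HTTP/1.0 404 Not Found');
            exit;
        }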

  • How can I scale movement physics functions to frames per second (in a game engine)?

    - by Richard
    I am working on a game in JavaScript (HTML5 canvas). I implemented a simple algorithm that lets an object follow another object, with basic physics mixed in: a force vector drives the object in the right direction, and the velocity accumulates momentum but is damped by a constant drag force each frame. At the moment it's set up as a rectangle following the mouse (x, y) coordinates. Here's the code:

        // rectangle x, y position
        var x = 400; // starting x position
        var y = 250; // starting y position
        var FPS = 60; // frames per second of the screen

        // physics variables:
        var velX = 0; // initial velocity at 0 (not moving)
        var velY = 0; // not moving
        var drag = 0.92; // drag force reduces velocity by 8% per frame
        var force = 0.35; // overall force applied to move the rectangle
        var angle = 0; // angle in which to move

        // called every frame (at 60 frames per second):
        function update(){
            // calculate distance between mouse and rectangle
            var dx = mouseX - x;
            var dy = mouseY - y;

            // calculate angle between mouse and rectangle
            var angle = Math.atan(dy/dx);
            if(dx < 0)
                angle += Math.PI;
            else if(dy < 0)
                angle += 2*Math.PI;

            // calculate the force (on or off, depending on user input)
            var curForce;
            if(keys[32]) // SPACE bar
                curForce = force; // if pressed, use 0.35 as force
            else
                curForce = 0; // otherwise, force is 0

            // increment velocity by the force, scaled by drag, for x and y
            velX += curForce * Math.cos(angle);
            velX *= drag;
            velY += curForce * Math.sin(angle);
            velY *= drag;

            // update x and y by their velocities
            x += velX;
            y += velY;
        }

    And that works fine at 60 frames per second. Now, the tricky part: my question is, if I change this to a different frame rate (say, 30 FPS), how can I modify the force and drag values to keep the movement the same in real time? That is, right now my rectangle (whose position is dictated by the x and y variables) moves at a maximum speed of about 4 pixels per frame, and accelerates to its max speed in about 1 second. BUT if I change the frame rate, it moves slower (e.g. at 30 FPS it accelerates to only 2 pixels per frame). So, how can I create an equation that takes FPS (frames per second) as input and spits out "drag" and "force" values that will behave the same way in real time? I know it's a heavy question, but perhaps somebody with game design experience or knowledge of programming physics can help. Thank you for your efforts. jsFiddle: http://jsfiddle.net/BadDB
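    There is a clean closed form for this, derived from the update rule itself: each frame does v <- (v + F) * d, so with no input the speed decays by a factor d per frame (d^fps per second), and with the key held the speed settles at the terminal value v_t = F * d / (1 - d) pixels per frame. Holding the per-second decay and the per-second top speed constant across frame rates gives, as a sketch:

        // Scale per-frame drag and force from the 60 FPS baseline to 'fps'.
        // Keeps the relative decay per second (drag^fps) and the top speed in
        // pixels per real second (force*drag/(1-drag) * fps) equal to 60 FPS.
        function scalePhysics(fps) {
            var baseFPS = 60, baseDrag = 0.92, baseForce = 0.35;
            var drag = Math.pow(baseDrag, baseFPS / fps);
            var force = baseForce * (baseFPS / fps)
                                  * (baseDrag / (1 - baseDrag))
                                  * ((1 - drag) / drag);
            return { drag: drag, force: force };
        }

    At 30 FPS this yields drag ≈ 0.846 and force ≈ 1.46, i.e. a top speed of about 8 pixels per frame, which is the same real-time speed as 4 pixels per frame at 60 FPS, since each frame now covers twice the time.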

  • Nagios creating lots of zombie processes

    - by pradeepchhetri
    On my monitoring box, I have lots of zombie processes created by Nagios, and they also get removed quickly. I am using active checks to monitor my servers. I accumulated the defunct processes using the following command, which collects the output of top 20 times with a 0.25 s delay:

        $ top -d 0.25 -b -n 20 > topout.txt

    I then grepped topout.txt for the defunct processes:

        $ cat topout.txt | grep defunct

    I get the following output:

        8957 nagios 20 0 0 0 0 Z 6.0 0.0 0:00.02 nagios <defunct>
        8951 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        8954 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        8945 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        8946 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        8980 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9000 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.00 nagios <defunct>
        9024 nagios 20 0 0 0 0 Z 7.0 0.0 0:00.02 nagios <defunct>
        9025 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.01 nagios <defunct>
        9040 nagios 20 0 0 0 0 Z 3.1 0.0 0:00.01 nagios <defunct>
        9086 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9087 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9123 nagios 20 0 0 0 0 Z 6.1 0.0 0:00.02 nagios <defunct>
        9126 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        9131 nagios 20 0 0 0 0 Z 3.0 0.0 0:00.01 nagios <defunct>
        9091 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.05 nagios <defunct>
        9111 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9119 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9118 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9151 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
        9153 nagios 20 0 0 0 0 Z 2.9 0.0 0:00.02 nagios <defunct>
        9150 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9164 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
        9171 nagios 20 0 0 0 0 Z 3.5 0.0 0:00.02 nagios <defunct>
        9154 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9156 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9163 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9167 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9178 nagios 20 0 0 0 0 Z 3.8 0.0 0:00.02 nagios <defunct>
        9174 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9179 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>
        9182 nagios 20 0 0 0 0 Z 0.0 0.0 0:00.01 nagios <defunct>

    Can somebody help me find the reason for these zombie processes, and how I can prevent them?
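    These particular zombies are most likely benign; the common explanation, offered without being able to verify it on your box: Nagios forks a child per active check (and many plugins fork again), and every exited child is a zombie for the brief moment between its exit and the parent's wait(). Sampling top every 0.25 s on a busy Nagios host will always catch a few of those. The real warning sign is a zombie that persists; a quick way to look for those is to list them with their parent, then re-run the command a minute later and see whether the same PIDs are still there:

        # list zombies together with the parent that should be reaping them
        ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'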

  • DNS queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally, so I can stop using BIND to do it. I've gotten it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. E.g. if I dig apache.org three times, none of them will hit the cache. I'm viewing the cache stats using nscd -g to determine whether it has been used. I've also turned the debug log level up to see if I can see it hitting, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        #   logfile                 <file>
        #   debug-level             <level>
        #   threads                 <initial #threads to use>
        #   max-threads             <maximum #threads to use>
        #   server-user             <user to run server as instead of root>
        #     server-user is ignored if nscd is started with -S parameters
        #   stat-user               <user who is allowed to request statistics>
        #   reload-count            unlimited|<number>
        #   paranoia                <yes|no>
        #   restart-interval        <time in seconds>
        #
        #   enable-cache            <service> <yes|no>
        #   positive-time-to-live   <service> <time in seconds>
        #   negative-time-to-live   <service> <time in seconds>
        #   suggested-size          <service> <prime number>
        #   check-files             <service> <yes|no>
        #   persistent              <service> <yes|no>
        #   shared                  <service> <yes|no>
        #   max-db-size             <service> <number bytes>
        #   auto-propagate          <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #

        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.
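    The most likely reason the cache never gets exercised, stated with reasonable confidence: dig (like host and nslookup) speaks DNS directly to the servers in resolv.conf and bypasses glibc's NSS stack entirely, so it can never hit nscd. Tools that resolve through getaddrinfo()/gethostbyname() do go through nsswitch; a quick test:

        # resolves via nsswitch.conf, so it should populate and then hit the nscd hosts cache
        getent hosts apache.org
        getent hosts apache.org
        nscd -g | grep -A 6 'hosts cache'

    Also worth noting, as far as I know: glibc's nsswitch has no "cache" source (that keyword is a FreeBSD convention); nscd interposes transparently whenever it is running, so the "cache" entry in "hosts: cache files dns" most likely just names a lookup module that is never found.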

  • Exchange 2010 OWA - a few questions about using multiple mailboxes

    - by Alexey Smolik
    We have an Exchange 2010 SP2 deployment, and we need our users to be able to access multiple mailboxes in OWA. The problem is that a user (e.g. John Smith) needs to access not just somebody else's (e.g. Tom Anderson's) mailboxes, but his OWN mailboxes, e.g. in different domains: [email protected], [email protected], [email protected], etc. Of course it is preferable for the user to work with all of his mailboxes from a single window. Such mailboxes can be added as multiple Exchange accounts in Outlook, which works almost fine. But in OWA, there are problems:

    1) In the left pane, as I've learned, we can only open the Inbox folders of other mailboxes. Is there no way to view all folders, like in Outlook?

    2) With Send-As permissions set, when trying to send a message from another address, the message is saved in the Sent Items folder of the mailbox that is opened in OWA, and not in the mailbox the message is sent from. The same thing happens with the trash can. Is there a way to fix that? This problem also exists in desktop Outlook when mailboxes are added automatically via the Auto-Mapping feature, so we need to turn it off and add the accounts manually. Is there a simpler workaround?

    3) Okay, suppose we only open Inbox folders in the left pane. The problem is that the mailbox names shown there are formed from the Display Name attribute. But those names are all identical! All the mailboxes are owned by John Smith, so they should all be named John Smith, so that a letter's recipient sees "John Smith" in the "from" field no matter which mailbox it was sent from. Also, the user knows his own name; there's no need to tell him. He wants to know which mailbox he is working with. So we need a way to either: a) customize OWA to show the mailbox email address instead of the user's Display Name, or b) make Exchange use another attribute in the "from" field when sending letters.

    4) Okay, we can switch between mailboxes using "Open Other Mailbox" in the upper-right corner menu. But: a) To select a mailbox, we need to enter its name (or first letters). Is there a way to show a list of links to the mailboxes the user has full access to, e.g. in the page header? b) If we start entering the first letters, we see a popup list of possible mailboxes to open. But it lists ALL mailboxes (apparently from the GAL), not only the ones the user has permission to open! How can that popup list be filtered? c) The same naming problem as in (3): we can see the opened mailbox's email address ONLY in the page URL, which is insufficient for many users. In the left pane we see "John Smith", which is useless.

    5) Each mailbox is tied to a separate user in AD. If someone has several mailboxes, we need additional dummy AD accounts, additional OUs to store them, etc. That's not very nice; is there a standardized, optimal way to build such a structure?

    We would really appreciate any answers or additional info on any of these questions. Thank you in advance.
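    On the Auto-Mapping part of (2): since Exchange 2010 SP1 the mapping can be disabled server-side at the moment full access is granted, so the extra mailboxes stay out of Outlook's automatic profile and can still be added manually. A sketch, with placeholder identity and user names:

        Add-MailboxPermission -Identity "JohnSmith-Domain2" -User "John Smith" `
            -AccessRights FullAccess -AutoMapping $false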

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to set RADIUS server to authenticate WPA2 against a LDAP. On Ubuntu. I got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We got a separate physical network just for WiFi, so not too many worries about security on that front. Our AP's are HP's low end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby! And the bad news: I now somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure"-things if avoidable. UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle: Ignoring EAP-Type/tls because we do not have OpenSSL support. Ignoring EAP-Type/ttls because we do not have OpenSSL support. Ignoring EAP-Type/peap because we do not have OpenSSL support. Basically the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP-types useless. Bummer. But some useful documentation for anybody interested: http://vuksan.com/linux/dot1x/802-1x-LDAP.html http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (See the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible) Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works. UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP: I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of: uid: testuser sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05 sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90 userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja When using radtest, I can connect fine: > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password Sending Access-Request of id 215 to 130.225.235.6 port 1812 User-Name = "msiebuhr" User-Password = "mr2Yx36N" NAS-IP-Address = 127.0.1.1 NAS-Port = 0 rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20 > But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords: ... rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930 rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035 [ldap] looking for reply items in directory... WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly? 
[ldap] user testuser authorized to use remote access rlm_ldap: ldap_release_conn: Release Id: 0 ++[ldap] returns ok ++[expiration] returns noop ++[logintime] returns noop [pap] Normalizing NT-Password from hex encoding [pap] Normalizing LM-Password from hex encoding ... It is clear that the NT and LM passwords differ from the above, yet the message [ldap] user testuser authorized to use remote access - and the user is later rejected...

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. And the worst part is, it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me. While this machine was physically connected to the network, no other machine could connect to the outside internet. While things were broken, running arp showed our gateway's IP address (10.0.1.1) listed as "invalid". Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as the owner of the gateway's IP address? There's nothing at all in syslog during the time when it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update: It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2: When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec

        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1

        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context: 10.0.x.x is the main subnet; 10.0.1.1 is the main internet gateway; 10.0.1.44 is a printer; 10.0.2.* devices are all laptops/workstations. I have no idea what's using the 192.168.x.x subnet; your guesses are at least as good as mine (a VM on a workstation? a misconfigured WAP? somebody re-sharing WiFi? a machine that failed to DHCP?). The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3. Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
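    One way to catch the culprit in the act next time, sketched with the addresses above: while the bad server is plugged in, watch who actually answers ARP queries for the gateway, and compare the replying MAC against the real gateway's c0:c1:c0:77:25:8e. A reply coming from the Ubuntu box's NIC (the MAC ending in cd:80) would confirm it is claiming 10.0.1.1:

        # show only ARP *replies* concerning the gateway; requests alone
        # never reveal the poisoner (opcode 2 = reply)
        tcpdump -penn 'arp[6:2] == 2 and arp host 10.0.1.1'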
