/** \page tstgss Optimizing with a Generating Set Search Method (GSS)
OptGSS is an implementation of a derivative-free algorithm for unconstrained
optimization called Generating Set Search. In GSS,
the next point in the optimization is determined solely by the value of the function
on a set of points around the current point. These search points are generated from a
fixed set of directions, called the generating set, hence the name of the algorithm.
The evaluation of the function on the search points, or search step,
lends itself naturally to a parallel implementation.
In this example, we outline the steps needed to set up GSS,
taking advantage of a parallel computing environment if present.
Further information and examples
can be found in the
Setting up and Solving an Optimization Problem section.
First, include the header files. The MPI header file is included if
and only if the program is compiled with the MPI flag.
\code
#ifdef HAVE_CONFIG_H
#include "OPT++_config.h"
#endif
#include <cstdlib>
#include <string>
#include <iostream>
#ifdef HAVE_STD
#include <cstdio>
#else
#include <stdio.h>
#endif
#ifdef WITH_MPI
#include "mpi.h"
#endif
#include "GenSet.h"
#include "OptGSS.h"
#include "NLF.h"
#include "tstfcn.h"
using NEWMAT::ColumnVector;
using std::cout;
using std::cerr;
using namespace OPTPP;
\endcode
Next, we have a subroutine declaration particular to this example.
The subroutine takes a string identifying one of the test problems
defined in tstfcn.h and returns the objective and initialization
functions for that problem.
\code
void SetupTestProblem(string test_id, USERFCN0 *test_problem, INITFCN *init_problem);
\endcode
After an argument check, initialize MPI. This does not need to be
done within an "ifdef", but if you want the option of also building a
serial version of your problem, then it should be. (Note: An argument
check is used here because this example is set up to work with
multiple problems. Such a check is not required by OPT++.)
\code
int main (int argc, char* argv[])
{
if (argc != 3) {
cout << "Usage: tstgss problem_name ndim\n";
exit(1);
}
#ifdef WITH_MPI
int me;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &me);
#endif
\endcode
Define the variables.
\code
string test_id = argv[1];
int ndim = atoi(argv[2]);
USERFCN0 test_problem;
INITFCN init_problem;
ColumnVector x(ndim);
SetupTestProblem(test_id, &test_problem, &init_problem);
\endcode
Now set up the output file. If you are running in parallel, you may
want to designate an output file for each processor; otherwise, the
output from all of the processors will be indiscriminately intertwined
in a single file. If the function evaluation does any file I/O, you
should set up a working directory for each processor and then have
each process chdir (or something comparable) into its corresponding
directory. Each working directory should have a copy of the input
file(s) needed by the function evaluation. If the function evaluation
requires file I/O and working directories are not used, it will not
work properly.
\code
char outfname[32];
#ifdef WITH_MPI
snprintf(outfname, sizeof(outfname), "%s.out.%d", test_id.c_str(), me);
#else
snprintf(outfname, sizeof(outfname), "%s.out", test_id.c_str());
#endif
\endcode
Next construct a nonlinear problem object of the given dimension,
with the functions obtained from the SetupTestProblem routine.
\code
NLF0 nlp(ndim, test_problem, init_problem);
\endcode
Create a GSS algorithm object from the nonlinear problem and a standard
generating set object of dimension ndim.
\code
GenSetStd gs(ndim);
OptGSS optobj(&nlp, &gs);
\endcode
After constructing the GSS algorithm object, we can adjust the algorithm's
default parameters, starting with the output file name.
The next three parameters are common to all OPT++ algorithms.
The last parameter is specific to OptGSS:
if the full-search flag is true, the algorithm evaluates all directions
around the current point,
searching for the point with the greatest function decrease;
otherwise, the current search step stops once a point of
sufficient decrease is found.
\code
optobj.setOutputFile(outfname);
optobj.setMaxIter(1000);
optobj.setMaxFeval(10000);
optobj.setFcnTol(1e-9);
optobj.setFullSearch(true);
\endcode
Optimize and clean up.
\code
optobj.optimize();
optobj.printStatus("Final Status:");
optobj.cleanup();
\endcode
Finally, shut down MPI.
\code
#ifdef WITH_MPI
MPI_Finalize();
#endif
}
\endcode
Next Section: Optimization Methods | Back to Main Page
Last revised September 14, 2006
*/